Benchmarking Nginx on a prehistoric PIII 500

On Thursday 13 March 2008 23:31:27 Cliff W. wrote:

The fact that people utilizing VPS hosting would benefit from a lightweight
HTTP server makes this particular benchmark quite useful.

If that was the intent, it should be reflected in the title, because now
it’s outright misleading. Also, the version of VMWare is essential.

I really think the environment itself is not the issue. It was tested on
VMWare because it was easier to set up at the moment. All testing servers
had the same conditions, and adding the benchmarking tool to the whole mess
only created more realistic conditions (there are usually a few other
services running on a VPS).

Yes, I could do the tests again, removing all disturbing factors (another
machine for the benchmarking tool, a dedicated box for both of them), but
this would only create perfect conditions for all of the tested software,
and I’m positive that it would do nothing other than raise the final
numbers by the same factor for all.

And I think I did a much better job here than most of the developers of
the tested software (by the number of different servers used, specifying
what hardware I used, and, most importantly, providing the config files…)

About the version: VMWare Workstation 6.0.1 build-55017.


Denis

On Fri, Mar 14, 2008 at 4:51 AM, Denis S. Filimonov
[email protected] wrote:

On Friday 14 March 2008 00:59:59 Denis Arh wrote:

numbers by the same factor for all.

I seriously doubt that VMWare (or any other hypervisor) scales performance
down uniformly; its influence is far more complex. And regardless of
whether I turn out to be right or wrong, a good test must not leave doubts.

And I think I did a much better job here than most of the developers of the
tested software (by the number of different servers used, specifying what
hardware I used, and, most importantly, providing the config files…)

By no means do I want to discourage you; I really appreciate what you are
doing, but there’s still a long way to go before you get to the bottom of
it. And when you do, it’ll be a really useful resource for everyone.

On Fri, 2008-03-14 at 05:59 +0100, Denis Arh wrote:

All testing servers had the same conditions, and adding the
benchmarking tool to the whole mess only created more realistic
conditions (there are usually a few other services running on a VPS).

A “few other services” rarely use 1000 connections and half the
available CPU. Not a realistic environment at all.

You are also failing to account for the fact that the load the client
places on the hardware and OS will be different for each HTTP server
being tested. Think about it: the performance of the client is directly
tied to the performance of the server. A server that is faster will
also make the client faster, and the faster the client, the more CPU it
hogs. The net result is that the fastest server software seems likely
to take the biggest performance hit by having its resources consumed by
the client. This would show up as a damping effect, bringing all the
results closer together than they really are.
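
To put toy numbers on that damping effect (all of them invented for
illustration, nothing measured): assume the server’s throughput scales with
the CPU share it gets, and the client burns a fixed amount of CPU per
request it drives. Solving the resulting fixed point in a few lines of
Python shows the fastest server losing the biggest slice:

def colocated_throughput(dedicated_rps, client_cpu_per_req=0.0004):
    # Fixed point of: rps = dedicated_rps * (1 - rps * client_cpu_per_req),
    # i.e. the server only gets the CPU share the client leaves behind.
    return dedicated_rps / (1 + dedicated_rps * client_cpu_per_req)

for dedicated in (500, 1000, 2000):  # hypothetical req/s on a dedicated box
    shared = colocated_throughput(dedicated)
    loss = 100 * (1 - shared / dedicated)
    print(f"dedicated {dedicated:4d} req/s -> "
          f"colocated {shared:4.0f} req/s ({loss:.0f}% lost)")

With these made-up constants the 2000 req/s server loses over 40% while the
500 req/s one loses under 20%, which is exactly the compression described
above.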

Yes, I could do the tests again, removing all disturbing factors
(another machine for the benchmarking tool, a dedicated box for both of
them), but this would only create perfect conditions for all of the
tested software, and I’m positive that it would do nothing other than
raise the final numbers by the same factor for all.

Your certainty only assures me you haven’t thought much about
benchmarking =)

It isn’t about creating “perfect conditions” (you can’t) so much as
removing glaring flaws.

And I think I did a much better job here than most of the developers
of the tested software (by the number of different servers used,
specifying what hardware I used, and, most importantly, providing the
config files…)

Being detailed about a flawed test does not make it any less flawed.

Your approach is a bit like testing the relative speed of a Jeep and a
Ferrari in the snow. Just because the conditions are the same for both
vehicles doesn’t really tell you anything about the actual relative
performance of each.

Not trying to be negative, just trying to avoid having one more badly
misleading benchmark published (yes, I know that all benchmarks are
misleading to some degree, but we can at least try).

Regards,
Cliff

On Fri, 2008-03-14 at 05:59 +0100, Denis Arh wrote:

I really think the environment itself is not the issue. It was tested
on VMWare because it was easier to set up at the moment. All testing
servers had the same conditions, and adding the benchmarking tool to
the whole mess only created more realistic conditions (there are
usually a few other services running on a VPS).

I feel I need to make this clearer: your environment is not the same
for each server. If Nginx is serving pages faster than Apache, then the
client will in turn run faster to keep up. If the client runs faster,
then it will consume more CPU, leaving less for the server to run on.
This means that Nginx would get less CPU time than Apache. Does that
help clarify why this isn’t a fair test?

I’d also point out that the CPU isn’t the only resource adversely affected
here. There’s also a ton of work that the host OS (not just the VPS)
must do to run both processes (context switching, memory allocation,
I/O, the network stack, etc.). These are affected in much the same way as
the CPU (the client consumes more, so less is available for the VPS and
the HTTP server).
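
If you want to gauge how big this effect is on your own box, here is a
rough sketch (assuming Linux, Python, and ApacheBench as the client;
substitute whatever benchmark tool you actually use):

import resource
import subprocess
import time

# Drive the server with ab and time the whole run.
wall_start = time.monotonic()
subprocess.run(["ab", "-n", "10000", "-c", "100", "http://127.0.0.1/"],
               check=True, capture_output=True)
wall = time.monotonic() - wall_start

# CPU seconds burned by the client (reaped children of this script).
usage = resource.getrusage(resource.RUSAGE_CHILDREN)
client_cpu = usage.ru_utime + usage.ru_stime
print(f"client burned {client_cpu:.1f}s CPU over {wall:.1f}s wall time "
      f"({100 * client_cpu / wall:.0f}% of one core unavailable to the server)")

If that percentage is substantial, the fastest servers in the comparison
are the ones being held back the most.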

Regards,
Cliff

Yes, I could do the tests again, removing all disturbing factors (another
machine for the benchmarking tool, a dedicated box for both of them), but
this would only create perfect conditions for all of the tested software,
and I’m positive that it would do nothing other than raise the final
numbers by the same factor for all.

I seriously doubt that VMWare (or any other hypervisor) scales performance
down uniformly; its influence is far more complex. And regardless of
whether I turn out to be right or wrong, a good test must not leave doubts.

Well… for now I have no means to run the tests in any other
environment. If anyone is willing to re-run them on Xen (or anything
else, for that matter) or on dedicated hardware, I look forward to
comparing the results.

Hi guys,

I have spent quite a lot of time benchmarking and modifying values and
all that stuff.

What I can say is: don’t focus too much on req/s, because that doesn’t
mean much. Pay careful attention to the time taken for each request (not
just the average) and also the maximum time taken. In some cases,
especially with concurrency, my average was 400ms with a maximum of 19s,
and 4% of the requests were over 1s.
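
In other words: report percentiles, not just the mean. Here is a small
sketch of the kind of summary I mean; the latencies are randomly generated
just so it runs, feed it your tool’s per-request timings instead (e.g. what
ab writes out with -g):

import random

random.seed(1)
# Fake per-request latencies in ms, mean around 300; replace with real data.
samples = sorted(random.expovariate(1 / 300) for _ in range(10000))

def percentile(sorted_vals, p):
    # Nearest-rank percentile, 0 < p <= 100.
    return sorted_vals[max(0, int(len(sorted_vals) * p / 100) - 1)]

print(f"mean: {sum(samples) / len(samples):6.0f} ms")
print(f"p95:  {percentile(samples, 95):6.0f} ms")
print(f"p99:  {percentile(samples, 99):6.0f} ms")
print(f"max:  {samples[-1]:6.0f} ms")
print(f"over 1s: {sum(v > 1000 for v in samples) / len(samples):.1%}")

A comfortable-looking mean with a multi-second maximum is exactly the case
where a bare req/s figure hides how bad the slow requests really are.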

Now, I need your point of view on scaling my server. It is a PIII 500.
I have been adding Thin instances and playing with the weights, and
nothing changed; I always got the same result. Actually, it got worse as
I added more Thin instances. The CPU usage was always at 100%. I tested
with Mongrel and got the same throughput as well.

With 2 Thin instances, the CPU usage was 50%/50% and the throughput was
the same. I was expecting to see somewhere around double…

I also noticed that I had Munin (a monitoring tool) working in the
background and eating 40% of the CPU updating graphs!!! However,
disabling Munin didn’t give me more throughput.

As I increased the concurrency, the best result still came from a
single Thin instance.

My final server will be more powerful than my PIII; however, I guess the
CPU usage will still be at maximum, so I don’t understand the need to
have more than one Thin/Mongrel instance running. Can someone explain to
me what’s happening? Is more than one Thin instance useful only when the
CPU is really powerful, in which case the CPU would sometimes be waiting
on Rails or MySQL to do their processing or disk access?
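
Thinking it through with some back-of-the-envelope arithmetic (invented
numbers, single CPU assumed), it seems extra instances only buy you
something when each request spends part of its time waiting on the
database or disk instead of burning CPU:

def throughput(workers, cpu_s, wait_s):
    # One CPU: each worker handles requests serially, and the CPU itself
    # cannot serve more than 1/cpu_s requests per second in total.
    per_worker = 1 / (cpu_s + wait_s)
    cpu_cap = 1 / cpu_s
    return min(workers * per_worker, cpu_cap)

print("pure CPU work (50ms CPU, no waiting):")
for n in (1, 2, 4):
    print(f"  {n} worker(s): {throughput(n, 0.050, 0.000):5.1f} req/s")

print("CPU plus DB/disk wait (50ms CPU, 150ms wait):")
for n in (1, 2, 4):
    print(f"  {n} worker(s): {throughput(n, 0.050, 0.150):5.1f} req/s")

With no waiting, one instance already saturates the CPU at 20 req/s and
extra instances add nothing; with 150ms of waiting per request, throughput
scales from 5 req/s up to the same 20 req/s cap. On a CPU-bound PIII, one
instance winning is the expected outcome.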

A little follow-up:

I have tested PunBB (a PHP forum) on my server using php-cgi, and the
result is 17 req/s, testing against a few forums so that the requests hit
the DB. So Rails is not as sluggish as people claim.