While reading articles over the weekend about Nginx, I found a few of
the typical Nginx vs Apache benchmarks.
In every benchmark, someone always showed up to defend Apache’s
appalling performance, claiming that Apache has a ton of modules
enabled by default in its configuration file, which should be removed
on a production system regardless.
That’s a fine argument, and it seems very likely true (though in that
case the config file should document exactly which module is needed for
what). But my question is: doesn’t Nginx also have a number of modules
compiled in for extra functionality, at least in a distribution binary
on Ubuntu, for instance?
I installed Nginx from apt on Ubuntu, so that is a pre-compiled binary
(I assume with all standard features enabled), whereas on CentOS I
compiled Nginx myself with several modules left out, such as Server
Side Includes.
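For what it’s worth, you can see exactly which modules a packaged
binary was compiled with, and leave modules out when building from
source. A minimal sketch (SSI and autoindex are just example modules):

    # Print the version and the configure arguments the binary was
    # built with; for Ubuntu's apt package this lists what is enabled.
    nginx -V

    # When compiling from source, standard modules can be excluded
    # at configure time:
    ./configure --without-http_ssi_module \
                --without-http_autoindex_module
    make && sudo make install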
So it stands to reason that Nginx is typically being benchmarked with a
variety of modules enabled by default as well. Granted, not as many as
Apache, but still several, meaning that in most benchmarks Nginx is not
running at its own peak efficiency either.
To be rid of the Nginx detractors, it would be nice to see a benchmark
of Nginx with only X list of modules against Apache with a matching
list of modules, so that each installation provides roughly the same
functionality, and then benchmark the two.
I should think that would remove anyone’s arguments.
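To illustrate the kind of parity I mean, a rough sketch (the module
names are examples; Apache and Nginx modules don’t map one-to-one, so
any pairing can only ever be approximate):

    # Ubuntu/Apache: disable some default-enabled modules whose
    # counterparts would be left out of the Nginx build:
    sudo a2dismod autoindex status include   # include = SSI
    sudo service apache2 restart

    # Nginx: build from source without the matching features
    # (stub_status is not compiled in unless requested, so the
    # "status" counterpart needs nothing removed here):
    ./configure --without-http_autoindex_module \
                --without-http_ssi_module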
The eventual goal of every test is to show how fast a tool can do the
required job.
Typical web server jobs are not limited to serving HTML and images;
they often involve compression, proxying, queuing connections and
sockets, and managing subprocesses and file handles. Configuration and
the sheer mass of code to run certainly add overhead, but on its own
that is not a point of comparison.
Most benchmarks that I’ve seen test many things at once and eventually
boil down to a single figure: “X req/s”. This approach is wrong, and
your idea of “compiling with the same modules” (which are not the same
modules in Apache and Nginx anyway) does not make it any better.
Here is how I’d offer to approach testing any production system:

- use httperf results collected via some runner (e.g. autobench); you
  may use a tool like Tsung if you need to know more about your server
- decide on your infrastructure: how things should be connected, which
  frontend works where, and so on
- choose your starting point (for me it’s serving a 50K static file
  from ...)
- start slowly changing the configuration and the test objective,
  compare the results to your starting point, and record the % of
  impact on the benchmarks
- make the same slow changes, in tests, to all of your system
  components: application servers, backends, proxies, databases,
  frontends, firewalls on and off, and so on, as much as you can

Forget the absolute digits; start thinking in % of improvement from one
case to another. A sketch of such a baseline-and-compare run follows.
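For instance, assuming httperf and autobench are installed, a run could
look like this (the hostname, URI, rates, and file names are
illustrative):

    # Baseline: the unchanged configuration serving the static file.
    # autobench drives httperf at a series of increasing request rates.
    autobench --single_host --host1 www.example.com --uri1 /50k.html \
              --low_rate 20 --high_rate 200 --rate_step 20 \
              --num_call 10 --num_conn 5000 --timeout 5 \
              --file baseline.tsv

    # Change exactly one thing in the configuration, then rerun:
    autobench --single_host --host1 www.example.com --uri1 /50k.html \
              --low_rate 20 --high_rate 200 --rate_step 20 \
              --num_call 10 --num_conn 5000 --timeout 5 \
              --file change1.tsv

    # Compare the two as a percentage of the baseline, e.g. if the
    # baseline peaked at 4200 req/s and the change at 4620 req/s:
    # (4620 - 4200) / 4200 * 100 = +10%

The point is the relative delta between the two result files, not the
absolute figure from either one.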
By getting the whole picture you will have more data on which to base
your decisions. For sure, in the average case (a dedicated server for a
small company running a PHP backend with 10,000 page hits a day) there
is no point in doing tests like the ones I described. In my opinion
such a company should choose the most convenient way to set up its
servers, thinking about the ease of changing sysadmins.