Poor performance when serving static files

Hello,

After being hit by a traffic wave recently, I decided to switch
hosting of all my Jetty-based web apps to Nginx, replacing Apache2.

One of these sites is for a type foundry, and the pages it serves
consist of many separate rendered GIF files that are all requested
simultaneously. When loading these pages I could not help but notice a
decrease in performance compared to the previous Apache-based hosting:
the time it took to load the whole page simply seemed longer, and page
rendering was less fluid. So I switched back to Apache and indeed, the
page loaded much quicker. Here is a link to such an example page
(currently served by Apache again):

(Clicking on any of the words will load another page, you will see what
I mean)

A day and many attempts at improving the performance later, I am no
further along. Nothing seems to improve the situation. And given all the
praise Nginx receives for serving static files and for use as a proxy, I
have a hard time believing this is expected behavior.

I’m on Debian Squeeze and first used a package from Debian
Backports ( http://backports.debian.org/ ). I later tried to compile my
own version, making sure epoll is in use, with no difference.

I tried many of the tips and tricks found online, made sure the static
files are indeed served by Nginx and not Jetty, and that none of the
described pitfalls apply ( Pitfalls and Common Mistakes | NGINX ), but nothing
has changed.
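One quick way to confirm which server actually answers for a static asset is to check the Server response header (a sketch; the hostname and asset path below are placeholders, not from this thread):

```shell
# nginx and Jetty identify themselves differently in the Server header,
# so this shows which one actually handled the request.
# (example.com and the asset path are placeholders)
curl -sI --max-time 5 http://example.com/img/sample.gif | grep -i '^Server:' \
  || echo "no Server header (host unreachable or header suppressed)"
```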

Could it be that for a low traffic site, Apache could actually
outperform Nginx noticeably in such a use case?

I’d hate to give up now, as I really like Nginx’s lightness and the way
it is configured. Below is the configuration in use. I hope somebody can
help me and shed some light on what’s going wrong.

Best,

Jürg

server {
    server_name lineto.com;
    root /srv/www/lineto.com/htdocs;
    access_log /var/log/nginx/lineto.com.access;
    error_log /var/log/nginx/lineto.com.error error;

    location / {
        proxy_pass http://localhost:8080/lineto/;
        proxy_redirect default;
        include proxy_params;
    }

    # Just pass these through to root
    location /img/ {}
    location /js/ {}
    location /1.0/ {}

    # Serve files with these extensions directly from root
    location ~ \.html$ {}
    location ~ \.css$ {}
    location ~ \.ico$ {}
}

Your paste really doesn’t contain anything interesting at all.

Here are some things that may help us help you:

  • Your entire config file
  • The output of nginx -V
  • A live website actually running on Nginx (doesn’t have to be port 80)


You are right, so here it is:

The same site running on Nginx:

http://lineto.com:81/The+Fonts/Font+Categories/Text+Fonts/Akkurat/Normal/

nginx -V
nginx: nginx version: nginx/1.0.4
nginx: TLS SNI support enabled
nginx: configure arguments: --prefix=/etc/nginx
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-log-path=/var/log/nginx/access.log
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi
--lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid
--with-debug --with-http_addition_module --with-http_dav_module
--with-http_flv_module --with-http_geoip_module
--with-http_gzip_static_module --with-http_image_filter_module
--with-http_perl_module --with-http_random_index_module
--with-http_realip_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_ssl_module
--with-http_sub_module --with-http_xslt_module --with-ipv6
--with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl
--with-mail --with-mail_ssl_module
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-development-kit
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-upstream-fair
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-echo
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-lua
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-http-push
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-upload-progress
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-secure-download

nginx.conf:

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    use epoll;
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    gzip on;
    gzip_disable "msie6";

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 1100;
    gzip_types text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Jürg

On Mon, Jul 18, 2011 at 07:03:03AM +0100, Jürg Lehni wrote:

You are right, so here it is:

The same site running on Nginx:

http://lineto.com:81/The+Fonts/Font+Categories/Text+Fonts/Akkurat/Normal/

Sometimes I see some delays on both URLs.
You can try to increase the priority of the nginx workers:

worker_processes NN;
worker_priority -10; # man nice


Igor S.

I just did that, with no measurable improvement.

I am aware that delays happen on both URLs sometimes, but reloading the
full site with an emptied browser cache, for example, takes about one
second on Apache and twice as long on Nginx. And without having done any
proper benchmarking, hitting any of these rendered type pages after that
just feels much slower and less fluid on Nginx. I am really surprised!

And I am still wondering: Could it be that for a low traffic site,
Apache could actually outperform Nginx noticeably?

Jürg

PS:

The server is in Germany, so the effect might be more noticeable in
countries closer to it.

On Mon, Jul 18, 2011 at 09:45:22AM +0100, Jürg Lehni wrote:

I just did that, with no measurable improvement.

I am aware that delays happen on both URLs sometimes, but reloading the full
site with an emptied browser cache, for example, takes about one second on Apache
and twice as long on Nginx. And without having done any proper benchmarking, hitting any
of these rendered type pages after that just feels much slower and less fluid on
Nginx. I am really surprised!

And I am still wondering: Could it be that for a low traffic site, Apache could
actually outperform Nginx noticeably?

You can try to increase the number of worker_processes, say to 4-8, and
turn off sendfile. Also, I cannot say how these third-party modules
may affect performance:

--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-development-kit
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-upstream-fair
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-echo
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-lua
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-http-push
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-upload-progress
--add-module=/build/edmonds-nginx_1.0.4-1~bpo60+1-amd64-oC2LKI/nginx-1.0.4/debian/modules/nginx-secure-download
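As a sketch, those two suggested changes against the nginx.conf posted earlier (the worker count is an example value, not from the thread):

```nginx
worker_processes 8;      # up from 2, within the suggested 4-8 range

http {
    sendfile off;        # serve files via plain read()/write() instead of sendfile()
}
```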




Igor S.

I’ve already tried increasing worker_processes and turning off sendfile;
no difference. The modules did not seem to affect performance either, as
I also compiled a fresh Nginx from source myself and saw no difference.
I then switched back to the package provided by Debian Backports. I am
now using the nginx-light package, still with the same performance.

Hi, and thanks for offering more tips.

I just did that; here is the version I am running now, with 32 workers.
Still, nothing improves.

Jürg

nginx: nginx version: nginx/1.0.4
nginx: built by gcc 4.4.5 (Debian 4.4.5-8)
nginx: TLS SNI support enabled
nginx: configure arguments: --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid
--lock-path=/var/lock/nginx.lock
--http-log-path=/var/log/nginx/access.log --with-http_dav_module
--http-client-body-temp-path=/var/lib/nginx/body --with-http_ssl_module
--http-proxy-temp-path=/var/lib/nginx/proxy
--with-http_stub_status_module
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug
--with-http_flv_module --with-file-aio

Hello!

On Mon, Jul 18, 2011 at 09:45:22AM +0100, Jürg Lehni wrote:

And I am still wondering: Could it be that for a low traffic
site, Apache could actually outperform Nginx noticeably?

Most likely you are disk bound, so with a relatively low number of
connections, having only 2 nginx workers that can block on disk is much
worse than having a process/thread per connection, as in Apache’s case.
Adding more workers may help.

You may also want to try file AIO[1]. But it may actually degrade
performance, as it looks like you are using Linux, and under Linux
it’s not possible to use AIO without directio (i.e. the filesystem
cache won’t work as a result).

[1] http://wiki.nginx.org/HttpCoreModule#aio

Note that you may need to recompile nginx with file AIO support,
i.e. use “./configure --with-file-aio”.
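A sketch of what enabling file AIO for one of the image locations might look like, given the Linux directio requirement described above (the location name comes from the posted config; the alignment and buffer sizes are assumed example values):

```nginx
location /img/ {
    aio on;
    directio 512;            # aio on Linux requires directio; bypasses the page cache
    output_buffers 1 128k;   # larger buffers for the read()-based output path
}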

Maxim D.

That’s one of the first things I tried during my quest for a solution.
It might improve things a bit, but it is still quite noticeably worse
than Apache.

Jürg

On Mon, Jul 18, 2011 at 10:22:05AM +0100, Jürg Lehni wrote:

Hi, and thanks for offering more tips.

I just did that; here is the version I am running now, with 32 workers.
Still, nothing improves.

Try to disable keepalive:

keepalive_timeout 0;


Igor S.

This is very interesting! I just tried Chrome and Firefox and I
don’t see these differences in performance; it’s only in Safari. In
Chrome, Nginx even seems to have an edge. How is this possible?

Thanks,

Jürg

On 18/07/2011 07:03, Jürg Lehni wrote:

You are right, so here it is:

The same site running on Nginx:

http://lineto.com:81/The+Fonts/Font+Categories/Text+Fonts/Akkurat/Normal/

Using Google Chrome from London, UK, I don’t see any noticeable
difference between the two sites, tested a few minutes before this
email was sent.

For performance tuning I find that either Chrome or Firefox with the
Firebug plugin lets you see the timings of all assets being loaded;
delays are very easy to see and debug. Note that Chrome seems vastly
faster and more efficient at loading massive quantities of image assets,
so be sure to test with both.

Next, I would use some kind of benchmarking tool so that you can test,
say, a single asset pulled repeatedly and multiple assets pulled in
parallel. This should help you figure out where the bottleneck is.
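A minimal sketch of such a benchmark using curl (the URL is a placeholder; dedicated tools like `ab` or `siege` would also do):

```shell
# Rough timing of asset fetches; URL is a placeholder for a real asset.
URL="http://example.com/img/sample.gif"

# The same asset pulled repeatedly (shows per-request overhead and keepalive effects)
for i in 1 2 3 4 5; do
  curl -s -o /dev/null --max-time 10 -w "serial   %{time_total}s\n" "$URL"
done

# Ten fetches in parallel, roughly what a page full of GIFs does
for i in $(seq 10); do
  curl -s -o /dev/null --max-time 10 -w "parallel %{time_total}s\n" "$URL" &
done
wait
```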

By now you should have spotted whether you are IO bottlenecked; if so,
then also consider simple things like changes to logging (buffered or
unbuffered), which can be stealing what little IO you have left.

If you haven’t figured out whether you are IO bound, then consider
making a small RAM-based cache drive, running the sites off that, and
disabling logging. This should quickly give you an upper estimate of the
machine’s capabilities, and you can work down from there. Obviously test
locally AND remotely, and beware of silly things like your ISP
implementing some upstream caching/filtering/throttling that could skew
your results.
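As a sketch of the buffered-logging idea (the `buffer=` parameter of `access_log` is standard nginx; the log path is taken from the posted config, the buffer size is an example):

```nginx
# Buffer access-log writes so each request doesn't cost a separate
# disk write; the buffer is flushed when full.
access_log /var/log/nginx/lineto.com.access combined buffer=32k;
```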

Good luck

Ed W

That was exactly it!

I knew it must be something really silly that I was overlooking.

Would it be good to mention this on the Wiki? I could imagine others
having similar issues.

It might also be good to allow control of keepalive behavior
separately for GET and POST requests?

Thank you all for your help!

Best,

Jürg

Hello!

On Mon, Jul 18, 2011 at 02:48:47PM +0100, Jürg Lehni wrote:

This is very interesting! I just tried Chrome and Firefox
and I don’t see these differences in performance; it’s only in
Safari. In Chrome, Nginx even seems to have an edge. How is this
possible?

This is probably due to keepalive being disabled by default for
Safari, due to problems with POSTs[1]. Try

keepalive_disable msie6;

(the default is "msie6 safari")

[1] 5760 – Safari hangs when uploading files to certain php scripts

Maxim D.

Thanks again to everybody involved in tracking down this issue.
I am very relieved to see that Nginx indeed performs at least as fast as
Apache even under light load, and I have now switched to using it on all
my servers.

It’s an impressive piece of beautiful software, and I am very happy to
be able to switch to it now.

Best,

Jürg

Hello!

On Mon, Jul 18, 2011 at 03:51:55PM +0100, Jürg Lehni wrote:

That was exactly it!

I knew it must be something really silly that I was overlooking.

Would it be good to mention this on the Wiki? I could imagine
others having similar issues.

It might also be good to allow control of keepalive behavior
separately for GET and POST requests?

The problem with Safari is that it may use any closed keepalive
connection to send a POST, and this will result in problems. That
is, the only way to mitigate the problem is to completely disable
keepalive connections with Safari.

A probably acceptable solution would be to change the keepalive_disable
default to "msie6" (which would disable keepalives with old MSIE
after POSTs), leaving "safari" as a workaround option for people
who really use POSTs and see problems with Safari.

Maxim D.