Config file for only static files

Hello guys.

I wanted to run a particular configuration by you guys to get your
thoughts. I’m moving from lighttpd to nginx.

First, a little bit of background. The site is a single server running
FreeBSD. It’s a Dual Processor Quad Core Xeon 5310 1.60GHz (Clovertown)
with 2 x 8MB cache and 4 GB RAM. The site serves only static content.
There is absolutely zero dynamic content. No databases involved. Each
static file is about 50 kb.

I get about 3000-3500 requests/second with lighttpd, and with my initial
setup of nginx I get about the same. While I’m happy with this, I used a
very simple config file and just wanted to see if the experienced folks
over here could point out some things that might boost that up even
further. It’s very simple and short (just about 20 lines) and I hope
some of you could give me some advice to get more performance (if
possible).


worker_processes 4;

events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;

sendfile        on;
tcp_nopush     on;

keepalive_timeout  65;

gzip  on;
gzip_types      text/plain text/html text/css application/x-javascript
                text/xml application/xml application/xml+rss ext/javascript;

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/local/www/data;
        index  indexd12.html;
    }

    error_page  404              /404.html;

}

}

You might want to consider using pre-compressed files:
http://wiki.codemongers.com/NginxHttpGzipStaticModule
That can save a lot of CPU cycles.
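A sketch of how the docroot could be pre-compressed in one pass, assuming a gzip that supports -k/--keep (GNU gzip 1.6+ and FreeBSD’s gzip both do) and using the /usr/local/www/data path from this thread:

```shell
# Pre-compress compressible static files next to their originals, so
# gzip_static can serve the .gz variant instead of compressing per request.
DOCROOT=/usr/local/www/data   # docroot used in this thread; adjust as needed
find "$DOCROOT" -type f \( -name '*.htm' -o -name '*.html' \
        -o -name '*.css' -o -name '*.js' \) |
while read -r f; do
    # -9: maximum compression; -k: keep the original, which nginx still
    # needs for clients that don't send Accept-Encoding: gzip;
    # -f: overwrite any stale .gz left over from a previous run.
    gzip -9 -k -f "$f"
done
```

Re-run it whenever the static files change, so the .gz copies never go stale.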

On Sat, Apr 05, 2008 at 03:55:47AM +0000, Amer wrote:


As it was suggested, try to use gzip_static.

Also, remove unused MIME types from gzip_types.
There is no application/xml, application/xml+rss, or ext/javascript
in the default mime.types. The gzip module tests Content-Type sequentially,
so the shorter the list, the better.

You may need to increase worker_connections; 1024 means that you are
able to handle only 4*1024 connections. You also need to increase the
number of file descriptors, sockets, etc. in the kernel.
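On FreeBSD, those kernel limits can be raised via sysctl. The knob names below are the standard FreeBSD ones, but the values are purely illustrative; tune them for your own load:

```
# /etc/sysctl.conf — illustrative values, not recommendations
kern.maxfiles=65536          # system-wide open file descriptor limit
kern.maxfilesperproc=32768   # per-process limit (applies to each worker)
kern.ipc.somaxconn=4096      # maximum listen(2) backlog
```

The same names can be set at runtime with sysctl(8); keep worker_processes * worker_connections comfortably below kern.maxfilesperproc.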

If you do not need the access_log, you may turn it off.
Or, you may use a buffered log:

http {

access_log   /path/to/log  buffer=32k;

Also, you may marginally decrease the number of syscalls using:

timer_resolution 100ms;

And finally, use the open file descriptor cache to decrease the number
of open()/stat()/close() syscalls:

http {

open_file_cache          max=10000  inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;

However, I do not think that all these settings will result in more
requests/second in your environment.

Amer,
Can you post the final config that gave you 3900 reqs/second?
Thanks!

Thanks for the feedback guys.

Apart from turning on gzip_static, I did what you guys suggested, and
I’m now consistently at 3900 requests/second in benchmarking.
Possibly with gzip_static, I can break the 4000 mark. Thanks, guys!

Hey Guys.

I’ve been trying to figure this out from the site but I just can’t :(

What I want is to set the root once at the server level instead of
repeating it in every location. I tried the following but I get a 403
forbidden:

 server {
    listen       80;
    server_name  localhost;
    root /usr/local/www/data;

    location / {
      index index.htm;
    }

}

Am I missing something?

Here is my full config :

worker_processes 5;
timer_resolution 100ms;

events {
worker_connections 1500;
}

http {
include mime.types;

sendfile        on;
tcp_nopush     on;

keepalive_timeout  65;

gzip_static on;

gzip_types  text/plain text/html text/css;
gzip_http_version   1.1;
gzip_proxied        any;
gzip_disable        "MSIE [1-6]\.";
gzip_vary           on;

open_file_cache          max=10000  inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;

server {
    listen       80;
    server_name  localhost;
    root /usr/local/www/data;

    location / {
      index index.htm;
    }

}
}

Give the gzip_static module a try to avoid gzip’ing content on the fly.

Use 7za a -tgzip -mx9 filename.gz filename

to achieve the maximum compression up front.

Cheers

Dave

Okay, I’ve crossed 4000 (consistently between 4100 and 4200) now, but I
had a couple of questions.

First of all, I’m using nginx-0.6.29.tar.gz
(http://sysoev.ru/nginx/nginx-0.6.29.tar.gz) from
http://sysoev.ru/nginx/download.html … I built it from source. I don’t
understand the Russian on that site, but it seems to me that this is a
development version? Is this stable to use in production?

Secondly, I noticed something strange with gzip_static on. If I have
this in my conf and I have a file indexd12.htm as well as
indexd12.htm.gz in the root, then even though I have indexd12.htm.gz it
still picks up indexd12.htm … However, when I delete indexd12.htm, the
server rightly sends back indexd12.htm.gz …
According to http://wiki.codemongers.com/NginxHttpGzipStaticModule,
the server should be sending back indexd12.htm.gz even if there is an
indexd12.htm in the directory. Any thoughts?

rkmr.em, this is my final config file (I also bumped up the max allowed
file descriptors in the FreeBSD kernel):

worker_processes 5;
timer_resolution 100ms;

events {
worker_connections 1500;
}

http {
include mime.types;
default_type application/octet-stream;

sendfile        on;
tcp_nopush     on;

keepalive_timeout  65;

gzip_static on;

gzip_http_version   1.1;
gzip_proxied        expired no-cache no-store private auth;
gzip_disable        "MSIE [1-6]\.";
gzip_vary           on;

open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

 server {
    listen       8080;
    server_name  localhost;

    location / {
        root   /usr/local/www/data;
        index  indexd12.htm;
    }
 }

}

I’m doing that now and it gives a massive boost to performance and file
sizes! Dave, any thoughts on the 403 issue?

Thanks.

Hi Amer,

On Sam 05.04.2008 22:01, Amer Shah wrote:

I tried the following but I get a 403 forbidden:

Am I missing something?

Is the user nobody allowed to read your root?

http://wiki.codemongers.com/NginxMainModule#user

What is in the error_log?

http://wiki.codemongers.com/NginxMainModule#error_log

Cheers

Aleks

Thanks for the reply Aleks!

I changed my conf to have www www. I put this at the top of my conf:

user www www;

Now, to confirm:

DX-20070509-049# ps aux | grep nginx
root 84389 0.0 0.1 2040 1632 ?? Is 3:44AM 0:00.00 nginx: master process ./nginx
www 84390 0.0 0.1 2360 1968 ?? S 3:44AM 0:00.03 nginx: worker process (nginx)
www 84391 0.0 0.1 2360 1968 ?? S 3:44AM 0:00.02 nginx: worker process (nginx)
www 84392 0.0 0.1 2360 1968 ?? S 3:44AM 0:00.03 nginx: worker process (nginx)
www 84393 0.0 0.1 2360 2020 ?? S 3:44AM 0:00.02 nginx: worker process (nginx)
www 84394 0.0 0.1 2360 1968 ?? S 3:44AM 0:00.02 nginx: worker process (nginx)
root 84407 0.0 0.0 1552 1036 p0 R+ 3:46AM 0:00.00 grep nginx

And finally, the permissions:

DX-20070509-049# cd /usr/local/www
DX-20070509-049# ll
total 14
lrwxr-xr-x 1 root wheel 27 Apr 2 21:41 cgi-bin -> /usr/local/www/cgi-bin-dist
dr-xr-xr-x 2 root wheel 512 Apr 2 21:41 cgi-bin-dist
drwxr-xr-x 9 root www 3584 Apr 6 03:45 data
dr-xr-xr-x 2 root wheel 1024 Apr 2 21:41 data-dist
drwxr-xr-x 3 root wheel 3584 Apr 2 21:41 icons
drwxr-xr-x 2 www www 512 Apr 6 03:25 proxy

It seems www has permissions to /usr/local/www/data

My error log shows this:

2008/04/06 03:44:38 [error] 84393#0: *1 directory index of "/usr/local/www/data/" is forbidden

On Sun, Apr 06, 2008 at 04:07:08AM -0400, Amer Shah wrote:

It seems www has permissions to /usr/local/www/data

My error log shows this :

2008/04/06 03:44:38 [error] 84393#0: *1 directory index of "/usr/local/www/data/" is forbidden

What does

ls -l /usr/local/www/data/index.htm

show?

On Sat, Apr 05, 2008 at 06:41:16PM -0400, Amer Shah wrote:

First of all, I’m using nginx-0.6.29.tar.gz
(http://sysoev.ru/nginx/nginx-0.6.29.tar.gz) from
http://sysoev.ru/nginx/download.html … I built it from source. I don’t
understand the Russian on that site, but it seems to me that this is a
development version? Is this stable to use in production?

It’s stable enough. I use it on most of my production sites.

Secondly, I noticed something strange with gzip_static on. If I have
this in my conf and I have a file indexd12.htm as well as
indexd12.htm.gz in the root, then even though I have indexd12.htm.gz it
still picks up indexd12.htm … However, when I delete indexd12.htm, the
server rightly sends back indexd12.htm.gz …
According to http://wiki.codemongers.com/NginxHttpGzipStaticModule,
the server should be sending back indexd12.htm.gz even if there is an
indexd12.htm in the directory. Any thoughts?

It’s strange. Could you create a debug log?

There is no index.htm … I’m using gzip_static and only have
index.htm.gz there.

DX-20070509-049# ls -l /usr/local/www/data/index.htm.gz
-rw-r--r-- 1 root www 5552 Apr 6 03:49 /usr/local/www/data/index.htm.gz

Also note that when I do www.hostname.com/index.htm, it works fine.

There is your problem: you need the source, non-gzipped file for
clients that don’t request that content-encoding. From my understanding,
nginx locates the requested file, then looks aside for a gzipped
version if gzip_static is enabled (and the original file has a matching
MIME type?)

Cheers

Dave

Yes this indeed was the problem. Thanks guys, you’re the best!

What I don’t get, though, is why it works when I hit index.htm directly
(even though there is no index.htm, only index.htm.gz), but not when I
hit www.hostname.com … In both cases I am using the same client
(browser). Shouldn’t I be served the .gz file in both cases?

On Sun, Apr 06, 2008 at 04:16:49AM -0400, Amer Shah wrote:

There is no index.htm … I’m using gzip_static and only have
index.htm.gz there.

DX-20070509-049# ls -l /usr/local/www/data/index.htm.gz
-rw-r--r-- 1 root www 5552 Apr 6 03:49 /usr/local/www/data/index.htm.gz

Also note that when I do www.hostname.com/index.htm, it works fine.

Because your browser supports gzipped content and it’s not disabled
in your configuration.

You should have two files: gzipped and ungzipped.
Or you may have ungzipped one only.

On Sun, Apr 06, 2008 at 06:26:47PM +1000, Dave C. wrote:

There is your problem: you need the source, non-gzipped file for
clients that don’t request that content-encoding. From my understanding,
nginx locates the requested file, then looks aside for a gzipped
version if gzip_static is enabled (and the original file has a matching
MIME type?)

No, gzip_types is not checked for static files.
It’s assumed that the admin has gzipped the right files.

The following directives are checked, if gzip_static is on:
gzip_http_version, gzip_proxied, and gzip_disable.

On Sun, Apr 06, 2008 at 04:36:37AM -0400, Amer Shah wrote:

Yes this indeed was the problem. Thanks guys, you’re the best!

What I don’t get, though, is why it works when I hit index.htm directly
(even though there is no index.htm, only index.htm.gz), but not when I
hit www.hostname.com … In both cases I am using the same client
(browser). Shouldn’t I be served the .gz file in both cases?

Because your browser supports gzipped content and it (the browser) is
not disabled in your configuration.

ngx_http_index_module tests only the existence of index.htm. It does not
know anything about any .gz files.

Ahh now I get it. Makes sense.

It’s quite an easy fix for me, because among the hundreds of gzipped
files I have (with no matching .htm files), I only have to make sure
that an .htm version of index.htm.gz is lying around, since that’s the
only one referenced by ngx_http_index_module.

Thanks for your help and patience Igor.

Amer.
