Nginx performance test

Hi,
I have a Dell R410 with dual Xeon E5530 CPUs and 8 GB of DDR RAM, and I would
like to test/measure nginx performance.

Here is my config file, /usr/local/etc/nginx/nginx.conf:
worker_processes 10;

events {
accept_mutex off;
worker_connections 8192;
use kqueue;
}

http {
server_names_hash_bucket_size 64;

include             /usr/local/etc/nginx/mime.types;
default_type        application/octet-stream;

log_format upstream '$remote_addr - $host - [$time_local] '
                '"$request" $status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" [$upstream_addr]';

access_log          /var/log/nginx/nginx-access.log;
error_log           /var/log/nginx-error.log;

# spool uploads to disk instead of clobbering downstream servers
client_body_temp_path /var/spool/nginx-client-body 1 2;
client_max_body_size 32m;
client_body_buffer_size    2048k;

sendfile            on;
tcp_nopush          on;
tcp_nodelay         off;

keepalive_timeout   1;

# proxy settings
proxy_redirect     off;

proxy_set_header   Host             $host;
proxy_set_header   X-Real-IP        $remote_addr;
proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;

proxy_connect_timeout      300;
proxy_send_timeout         300;
proxy_read_timeout         300;

proxy_buffer_size          32k;
proxy_buffers              4 64k;
proxy_busy_buffers_size    64k;
proxy_temp_file_write_size 64k;
include             /usr/local/etc/nginx/upstream.conf;
include             /usr/local/etc/nginx/sites/*.conf;

}

And a vhost is defined as follows:

proxy_cache_path /usr/local/www/xxx/cache levels=1:2 keys_zone=XXX:10m
inactive=24h max_size=1g;

server {
ssl on;
ssl_certificate /usr/local/etc/nginx/certs/xxx.pem;
ssl_certificate_key /usr/local/etc/nginx/certs/xxx.key;
keepalive_timeout 70;

listen kkk.kkk.kkk.kkk:443;
server_name xxx.*;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_store_access user:rw group:rw all:r;

proxy_connect_timeout 1200;
proxy_send_timeout 1200;
proxy_read_timeout 1200;

access_log /var/log/nginx/xxx-access.log upstream;
error_log /var/log/nginx/xxx-error.log;

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/local/www/nginx-dist;
}

location ~ \.(gif|jpg|png)$ {
proxy_pass http://zzz.zzz.zzz.zzz:80;
proxy_cache XXX;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout invalid_header updating http_500
http_502 http_503 http_504;
}

location / {
proxy_pass http://zzz.zzz.zzz.zzz:80;
}
}

Should I use httperf? JMeter? With this configuration, can I estimate how
many static pages I could serve per unit of time?
Other statistics?

Thanks,
d.

You may use whatever tools you like. I would recommend testing with as
many as possible from external systems, and I like to use siege and
httperf personally. You’re also testing your OS (and your network,
apparently) as well as nginx, so keep in mind you may have to do
further tuning.
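
For instance, something roughly like this (hostname, URI and rates are just
placeholders, not a recommendation):

# ~10,000 requests at 100 new connections per second against a single URI
httperf --server xxx.example.com --port 443 --ssl --uri /index.html \
        --num-conns 10000 --rate 100 --timeout 5

# 50 concurrent simulated users hitting the site for two minutes
siege -c 50 -b -t 2M https://xxx.example.com/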

That said, I’m not sure exactly what it is you are intending to test
with this setup. I would also note that nginx is not a forward proxy,
and your configuration at least appears to be acting like one… so I
don’t think your “results”, whatever they turn out to be, will be
particularly useful to anyone but you.

And I always wonder when I see it in configurations… Did you REALLY
have a server_name so long that you had to change the hash bucket
size? (See the “Server names” page in the nginx documentation.)

– Merlin

/me wonders why nginx can’t automatically allocate the RAM needed for
server_names based on the configuration; it’s static (except for
wildcard/regex-matched names, which are the only variable part here)

On Mon, 2010-01-18 at 13:42 -0800, merlin corey wrote:

And I always wonder when I see it in configurations… Did you REALLY
have a server_name so long that you had to change the hash bucket
size?

More likely, too many server_names. I have had to adjust this on
some servers that act as shared hosting systems.
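
On those boxes the http block typically ends up with something along these
lines (the numbers are only illustrative; nginx complains at startup if they
are still too small):

server_names_hash_max_size    4096;
server_names_hash_bucket_size 128;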

Regards,
Cliff


You may use whatever tools you like. I would recommend testing with as
many as possible from external systems, and I like to use siege and
httperf personally. You’re also testing your OS (and your network,
apparently) as well as nginx, so keep in mind you may have to do
further tuning.
Yes, I’ve tuned my OS by increasing a number of sysctl values:
dave@goose:~> more /etc/sysctl.conf
security.bsd.see_other_uids=0
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.carp.preempt=1
net.inet.carp.arpbalance=1
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536
net.link.ether.inet.log_arp_wrong_iface=0
kern.ipc.somaxconn=1024

A lot of them are BSD-specific, but I think they are understandable. :)
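
For reference, each of these can also be checked or changed on the running
system before committing it to /etc/sysctl.conf, e.g.:

sysctl net.inet.tcp.sendspace          # show the current value
sysctl net.inet.tcp.sendspace=65536    # set it at runtime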

The OS (FreeBSD 8.0-p2) has device polling enabled to improve network
responsiveness under heavy load.
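
For anyone curious, on FreeBSD 8 that roughly means a kernel built with
polling support and then enabling it per interface (em0 below is just an
example NIC):

options DEVICE_POLLING     # in the kernel configuration file
options HZ=1000            # a higher HZ is usually recommended with polling

ifconfig em0 polling       # enable polling on the interface at runtime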

That said, I’m not sure exactly what it is you are intending to test
with this setup. I would also note that nginx is not a forward proxy,
and your configuration at least appears to be acting like one… so I
don’t think your “results”, whatever they turn out to be, will be
particularly useful to anyone but you.
No, nginx is not acting as a forward proxy: it passes all requests to a
backend node (that one is a special config); this is a more generic
vhost definition:

server {
listen 172.16.6.50:80;
server_name DDDddddd;

access_log /var/log/nginx/cw-access.log;
error_log /var/log/nginx/cw-error.log;

location / {
rewrite ^/(.*)$ http://www.DDDDDDDDDDDDDDDDD.com redirect;
}
location /LOC1 {
proxy_pass http://1_backend;
}
location /LOC2 {
proxy_pass http://2_backend;
}
location /LOC3 {
proxy_pass http://2_backend;
}
location /LOC4 {
proxy_pass http://1_2_4_backend;
}
location /LOC5 {
proxy_pass http://1_2_4_backend;
}
location /LOC6 {
proxy_pass http://1_2_4_backend;
}
}

where the backends are defined like this:

upstream 1_2_4_backend {
server 172.16.6.61:80;
server 172.16.6.62:80;
server 172.16.6.64:80;
server 127.0.0.1:8080 backup;
}

where 127.0.0.1:8080 is a backup backend that serves a courtesy page
when all the other backends are down.
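
If that courtesy page is served by nginx itself, it is just a small local
server block, something like this (the document root is made up for the
example):

server {
    listen 127.0.0.1:8080;
    root /usr/local/www/courtesy;    # hypothetical path to the static courtesy page
    index index.html;

    location / {
        # whatever URI comes in, answer with the courtesy page
        try_files $uri /index.html;
    }
}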

And I always wonder when I see it in configurations… Did you REALLY
have a server_name so long that you had to change the hash bucket
size?

Oh, it was an old setting I could remove.

Thanks,
d.

kern.ipc.somaxconn=1024

you might need to increase that one

Thank you,
d.

merlin corey wrote:

And I always wonder when I see it in configurations… Did you REALLY have a server_name so long that you had to change the hash bucket size?

Older versions, such as 0.6.32, would refuse to start unless you
increased it to 64, even with a couple of simple server names. I took
the directive out when I upgraded to 0.8.32; maybe some people didn’t.

Tobia

On Monday, 18 January 2010, at 23:29:55, Davide D’Amico wrote:

kern.ipc.somaxconn=1024

you might need to increase that one
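
Something along these lines, for example (the value is only a guess; watch
the queues under load and adjust):

sysctl kern.ipc.somaxconn=4096     # raise the listen queue limit at runtime
netstat -Lan                       # then keep an eye on the listen queues under load

nginx also accepts a backlog= parameter on the listen directive if you ever
want to set the queue depth explicitly per socket.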

Momchil