Hi,
I did a simple load test for 1.2 and found that 1.2 is slower than
1.1.12.
client --- nginx (1.2.0 / 1.1.12) --- nginx (1.1.12)
The tested nginx worked as a reverse proxy in front of another nginx box
which serves static files; upstream keepalive is not enabled.
Same test suite, the only difference is the version of nginx.
1.2.0 is about 30% slower.
BR,
DeltaY
On Thursday 26 April 2012 13:50:40 Delta Y. wrote:
1.2.0 is about 30% slower.
Can you provide more details about how you tested it and what test
suite you used?
Personally, I have found that some test suites can actually perform
slower than nginx and produce incorrect results (e.g. disabling
logging decreased RPS).
wbr, Valentin V. Bartenev
I use JMeter to get a static file and observe the requests per second.
For nginx 1.2, I observe the output get stuck for a second, then the
rate decreases.
After 2 or 3 such stalls, the result becomes stable.
On 26 April 2012 at 18:35, Valentin V. Bartenev [email protected] wrote:
I used LoadRunner as well, and got the same conclusion: 1.2.0 is slower
than 1.1.12.
I observed connect timeout errors in the LoadRunner console, but I didn't
see such errors with 1.1.12.
For 1.2.0, the RPS line dives very low every 15-20 seconds; I think
it is because connections to nginx time out.
I tested 1.1.17; the result is almost the same as 1.1.12.
I then tested 1.1.19; the result is like 1.2.0.
So what changed between 1.1.17 and 1.1.19 that results in connection
timeouts every 15-20 seconds?
On 26 April 2012 at 22:26, Valentin V. Bartenev [email protected] wrote:
On 27.04.2012, at 11:12, Delta Y. wrote:
So what changed between 1.1.17 and 1.1.19 that results in connection
timeouts every 15-20 seconds?
Could you show your config? And what OS are you using?
On Thursday 26 April 2012 18:07:48 Delta Y. wrote:
I use JMeter to get a static file and observe the requests per second.
For nginx 1.2, I observe the output get stuck for a second, then the
rate decreases. After 2 or 3 such stalls, the result becomes stable.
Can you reproduce it with other benchmarking tools (httperf, siege)?
Do you run it on the same host or from several dedicated machines?
I assume that you’re benchmarking JMeter itself.
wbr, Valentin V. Bartenev
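One way to sanity-check whether the benchmark client itself is the bottleneck is a tool-independent probe. The sketch below is hypothetical (the built-in loopback server and the request count of 100 are assumptions, not anything from the thread); it only illustrates measuring requests per second without JMeter or LoadRunner:

```python
# A stdlib-only RPS probe (hypothetical sanity check, not a replacement
# for httperf or siege): spin up a trivial local HTTP server, hammer it
# sequentially, and report requests per second.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging, as it skews timing

# Bind to an ephemeral port on loopback; point `url` at the real nginx
# box instead to probe it directly.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

n = 100  # request count is an arbitrary assumption
start = time.time()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        assert resp.status == 200
        resp.read()
rps = n / (time.time() - start)
print("%.0f requests/sec" % rps)
server.shutdown()
```

Pointing `url` at the nginx box under test gives a rough single-connection figure to compare against what the test suite reports.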
The LoadRunner output of 1.1.19 is attached; there are many connection
timeout errors reported by LoadRunner.
My latest test shows that 1.1.18 is OK, so changes between 1.1.18 and
1.1.19 make things worse in my test lab.
The OS is Debian squeeze 32-bit, and the config is:
user root root;
worker_rlimit_nofile 81920;
worker_processes 2;
pid /secone/var/run/nginx.pid;
error_log /dev/null error;
pcre_jit on;

events {
    use epoll;
    worker_connections 12000;
    multi_accept on;
    accept_mutex off;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    reset_timedout_connection on;
    ignore_invalid_headers on;
    underscores_in_headers on;
    proxy_buffering on;
    proxy_ignore_client_abort on;
    proxy_intercept_errors on;
    client_header_buffer_size 16k;
    large_client_header_buffers 4 16k;
    msie_padding off;
    msie_refresh off;
    access_log off;

    upstream upstream_www {
        server 192.168.2.3:80 max_fails=0;
        server 192.168.2.4:80 max_fails=0;
        server 192.168.2.5:80 max_fails=0;
        server 192.168.2.6:80 max_fails=0;
    }

    upstream upstream_www2 {
        server 192.168.2.254:80 max_fails=0;
    }

    server {
        listen 192.168.1.1:80;
        server_name _;
        proxy_set_header Host $http_host;
        location / {
            proxy_pass http://upstream_www2;
            proxy_redirect default;
        }
    }

    server {
        client_header_timeout 60;
        client_body_timeout 120;
        client_max_body_size 100m;
        client_header_buffer_size 16k;
        large_client_header_buffers 4 16k;
        proxy_buffer_size 16k;
        proxy_buffers 4 16k;
        listen 192.168.1.1:80;
        server_name www.test.com;
        ssl off;
        keepalive_timeout 0;
        proxy_cache_key $scheme$host$request_uri;
        location / {
            proxy_set_header Accept-Encoding identity;
            proxy_set_header Host $http_host;
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
            proxy_cache off;
            proxy_pass http://upstream_www;
        }
    }
}
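Since the first message notes that upstream keepalive is not enabled, a minimal sketch of what enabling it would look like may be useful for comparison (the `keepalive` directive in the `upstream` block exists since nginx 1.1.4; the connection count of 32 here is an arbitrary assumption):

```nginx
upstream upstream_www {
    server 192.168.2.3:80 max_fails=0;
    server 192.168.2.4:80 max_fails=0;
    keepalive 32;  # cache up to 32 idle upstream connections per worker
}

server {
    location / {
        proxy_pass http://upstream_www;
        proxy_http_version 1.1;          # keepalive needs HTTP/1.1 ...
        proxy_set_header Connection "";  # ... and a cleared Connection header
    }
}
```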
On 27 April 2012 at 16:08, Sergey B. [email protected] wrote:
Hello!
On Fri, Apr 27, 2012 at 06:51:04PM +0800, Delta Y. wrote:
After reverting svn r4577 in 1.2.0, I get almost the same benchmark
results with 1.2.0 and 1.1.17.
So it seems r4577 decreases nginx 1.2.0 performance by 30-40% (a lot
of connect timeouts reported by LoadRunner) in my test lab.
The nginx box OS is Debian squeeze 32-bit; the CPU is an Intel Core 2.
Which compiler do you use? Which compiler flags were used during
compilation?
Maxim D.
After reverting svn r4577 in 1.2.0, I get almost the same benchmark
results with 1.2.0 and 1.1.17.
So it seems r4577 decreases nginx 1.2.0 performance by 30-40% (a lot
of connect timeouts reported by LoadRunner) in my test lab.
The nginx box OS is Debian squeeze 32-bit; the CPU is an Intel Core 2.
On 27 April 2012 at 16:43, Delta Y. [email protected] wrote:
Using built-in specs.
Target: i486-linux-gnu
Configured with: …/src/configure -v --with-pkgversion='Debian
4.3.5-4' --with-bugurl=file:///usr/share/doc/gcc-4.3/README.Bugs
--enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr
--enable-shared --enable-multiarch --enable-linker-build-id
--with-system-zlib --libexecdir=/usr/lib --without-included-gettext
--enable-threads=posix --enable-nls
--with-gxx-include-dir=/usr/include/c++/4.3 --program-suffix=-4.3
--enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc
--enable-mpfr --enable-targets=all --with-tune=generic
--enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu
--target=i486-linux-gnu
Thread model: posix
gcc version 4.3.5 (Debian 4.3.5-4)
./nginx -V
nginx version: nginx/2.1.2.0
built by gcc 4.3.5 (Debian 4.3.5-4)
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx --pid-path=/var
--with-cpu-opt=pentium4 --with-ipv6 --with-pcre=/usr/local/pcre
--with-pcre-jit --without-http_autoindex_module
--without-http_ssi_module --without-http_referer_module
--without-http_userid_module --without-http_empty_gif_module
--without-http_limit_req_module --without-http_browser_module
--without-http_memcached_module --without-http_charset_module
--without-http_split_clients_module --with-http_stub_status_module
--with-http_ssl_module --with-http_realip_module
--with-http_sub_module --with-http_geoip_module -
On 27 April 2012 at 18:57, Maxim D. [email protected] wrote:
Hello!
On Fri, Apr 27, 2012 at 07:06:02PM +0800, Delta Y. wrote:
--enable-mpfr --enable-targets=all --with-tune=generic
--enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu
--target=i486-linux-gnu
Thread model: posix
gcc version 4.3.5 (Debian 4.3.5-4)
./nginx -V
nginx version: nginx/2.1.2.0
This doesn't look like vanilla nginx. Could you please test the
vanilla one, without any patches?
--with-http_ssl_module --with-http_realip_module
--with-http_sub_module --with-http_geoip_module -
The output looks truncated (note the trailing "-"), is it?
In any case, please provide a sample of an actual gcc command as
used during compilation of nginx object files; it may contain
various flags not visible in "nginx -V" output.
Maxim D.
On 27 April 2012 at 20:22, Maxim D. [email protected] wrote:
--with-system-zlib --libexecdir=/usr/lib --without-included-gettext
./nginx -V
nginx version: nginx/2.1.2.0
This doesn't look like vanilla nginx. Could you please test the
vanilla one, without any patches?
It's an OpenResty-like nginx bundle, consisting of modules for
reverse proxy use only.
No third-party patch is applied except the "proxy_bind support
variable" one.
Maxim, I remember it is from you.
The leading 2 in the version string means the upstream keepalive feature only.
--with-http_ssl_module --with-http_realip_module
--with-http_sub_module --with-http_geoip_module -
The output looks truncated (note the trailing "-"), is it?
In any case, please provide a sample of an actual gcc command as
used during compilation of nginx object files; it may contain
various flags not visible in "nginx -V" output.
The shell script to build nginx is:
DEPLOY_TARGET="/home/ngxproxy"
INSTALL_DIR="/home/ngxproxy"

# tell nginx's build system where to find lua:
#export LUA_LIB=/usr/local/lib
#export LUA_INC=/usr/local/include
#export LUA_LIB=/usr/lib
#export LUA_INC=/usr/include/lua5.1

# or tell where to find LuaJIT when you want to use JIT instead
export LUAJIT_LIB=$DEPLOY_TARGET/lib
export LUAJIT_INC=$DEPLOY_TARGET/include/luajit-2.0

./configure --prefix=$INSTALL_DIR \
    --with-cpu-opt=pentium4 \
    --with-ipv6 \
    --with-pcre=$DEPLOY_TARGET/pcre \
    --with-pcre-jit \
    --without-http_autoindex_module \
    --without-http_ssi_module \
    --without-http_referer_module \
    --without-http_userid_module \
    --without-http_empty_gif_module \
    --without-http_limit_req_module \
    --without-http_browser_module \
    --without-http_memcached_module \
    --without-http_charset_module \
    --without-http_split_clients_module \
    --with-http_stub_status_module \
    --with-http_ssl_module \
    --with-http_realip_module \
    --with-http_sub_module \
    --with-http_geoip_module \
    --add-module=src/http/modules/ngx_devel_kit \
    --add-module=src/http/modules/lua-nginx \
    --add-module=src/http/modules/headers_more \
    --add-module=src/http/modules/substitute \
    --add-module=src/http/modules/ngx_iconv

make
make install
Hello!
On Fri, Apr 27, 2012 at 09:08:45PM +0800, Delta Y. wrote:
--enable-shared --enable-multiarch --enable-linker-build-id
No third-party patch is applied except the "proxy_bind support variable" one.
Maxim, I remember it is from you.
The leading 2 in the version string means the upstream keepalive feature only.
Again: it's a really good idea to test vanilla nginx, without any
patches and 3rd party modules, especially when debugging such
subtle problems. Even a trivial patch might cause unspecified
behaviour, e.g. if it is applied incorrectly due to context code
changes. The same applies to 3rd party modules.
--with-http_ssl_module --with-http_realip_module
--with-http_sub_module --with-http_geoip_module -
The output looks truncated (note the trailing "-"), is it?
In any case, please provide a sample of an actual gcc command as
used during compilation of nginx object files; it may contain
various flags not visible in "nginx -V" output.
The shell script to build nginx is:
I need a gcc command as shown during "make", like this:
gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter
-Wunused-function -Wunused-variable -Wunused-value -Werror -g
-I src/core -I src/event -I src/event/modules -I src/os/unix -I objs
-o objs/src/core/nginx.o src/core/nginx.c
Maxim D.