Active connections of nginx continually increase

I have several nginx web servers acting as reverse proxies.

I found out that the active connections (including reading, writing and
waiting, as seen from the http_stub_status module) on some of the
servers (not all of them) keep growing
from 3000 to 5000, 10000 … 10k … 50k, and never decrease, even late
at night.

At the same time, I get a more reliable number from netstat:

netstat -nap | grep 80 | grep EST | wc -l
2743
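As an aside, `grep 80` also matches port 8080, remote ports and PID columns, so anchoring the match on the local port is more precise. A sketch against canned output (the sample lines below are made up; the field layout assumes Linux `netstat -nat`, where $4 is the local address and $6 the state):

```shell
# Made-up sample of `netstat -nat` output lines (no headers)
sample='tcp 0 0 10.0.0.1:80 10.0.0.9:51234 ESTABLISHED
tcp 0 0 10.0.0.1:8080 10.0.0.9:51235 ESTABLISHED
tcp 0 0 10.0.0.1:80 10.0.0.9:51236 TIME_WAIT'

# Count only ESTABLISHED connections whose local port is exactly 80
printf '%s\n' "$sample" | awk '$4 ~ /:80$/ && $6 == "ESTABLISHED"' | wc -l   # → 1
```

The same filter run against the live system would be `netstat -nat | awk '$4 ~ /:80$/ && $6 == "ESTABLISHED"' | wc -l`.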

The keepalive_timeout is 10 seconds.

The worker processes were all started at the same time:

5265 nginx: master process         6-19:18:55 May19
24498  \_ nginx: worker process         59:34 19:16
24499  \_ nginx: worker process         59:34 19:16
24500  \_ nginx: worker process         59:34 19:16
24501  \_ nginx: worker process         59:34 19:16
24502  \_ nginx: cache manager pr       59:34 19:16

I’ve found a similar problem at:

http://markmail.org/search/?q=Upload+module+%2B+PHP+causes+active+connections+to+continually#query:Upload%20module%20%2B%20PHP%20causes%20active%20connections%20to%20continually+page:1+mid:fdgyk6v32lnvaxul+state:results

but it doesn't seem to be the same issue as mine. There are also no
related errors in error.log.

the system is

cat /etc/issue
CentOS release 5.3 (Final)
Kernel \r on an \m

uname -a
Linux 2.6.18-128.el5xen #1 SMP Wed Jan 21 11:12:42 EST 2009 x86_64

x86_64 x86_64 GNU/Linux

the nginx version:

nginx -V
nginx version: nginx/1.0.14
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-44)
TLS SNI support disabled
configure arguments: --prefix=/home/web/nginx/ --user=nobody
--group=nobody
--with-http_ssl_module --with-http_sub_module
--with-http_dav_module
--with-http_flv_module --with-http_gzip_static_module
--with-http_stub_status_module
--http-proxy-temp-path=/home/web/nginx/data/proxy
--http-fastcgi-temp-path=/home/web/nginx/data/fastcgi
--http-client-body-temp-path=/home/web/nginx/data/client
--with-pcre=…/pcre-7.9
--add-module=…/ngx_http_upstream_keepalive-d7643c291ef0
--add-module=…/hmux/ --add-module=…/nginx-sticky-module-1.0/
--with-google_perftools_module
--add-module=…/nginx_upstream_check_module-660183a

the modules are:

1: for cookie sticky
nginx-sticky-module.googlecode.com
2: hmux module for resin
code.google.com/p/nginx-hmux-module/
3: upstream check module
github.com/yaoweibin/nginx_upstream_check_module
4: upstream keepalive
mdounin.ru/hg/ngx_http_upstream_keepalive/

All patches are applied to the nginx source code.

nginx.conf:

user  nobody;
worker_processes  4;
worker_cpu_affinity 0001 0010 0100 1000;
google_perftools_profiles /home/web/nginx/tcmalloc/tc;

events {
    worker_connections 51200;
    use epoll;
    epoll_events 4096;
    multi_accept on;
    accept_mutex off;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] '
                  '$status $body_bytes_sent ';


    access_log  logs/access.log  main;

    sendfile        on;
    keepalive_timeout  10;

    server_tokens off;

    gzip  on;
    gzip_types  text/plain text/css application/x-javascript
                text/xml application/json application/xml application/xml+rss
                text/javascript;
    gzip_vary on;

    server_names_hash_max_size 4096;
    proxy_buffer_size   64k;
    proxy_buffers       8 64k;
    proxy_busy_buffers_size     64k;
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
    proxy_headers_hash_max_size 1024;
    proxy_headers_hash_bucket_size 128;
    client_max_body_size 25m;

    upstream backend {
        check interval=5000 fall=3 rise=2 timeout=2000
              default_down=false type=tcp;
        keepalive 1024;
        server server1:80;
        server server2:80;
    }
    server {
        listen 80;
        server_name xxx;

        location / {
            proxy_pass http://backend;
        }

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {
            root   html;
        }
    }

}
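The config above doesn't include the status endpoint that gets curled later in the thread, so presumably there is a location along these lines somewhere (a sketch of a typical stub_status location, not the poster's actual config):

```nginx
# hypothetical status location; the real one may differ
location /status {
    stub_status on;    # Active connections / accepts handled requests / Reading Writing Waiting
    access_log  off;
}
```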

Posted at Nginx Forum:

> from 3000 to 5000, 10000 … 10k … 50k, and never reduce even in the
> late night.

Just to be sure - are you not by any chance reading the all-time total
values (3rd line) instead of the actual ones, which are on the 4th?
E.g. can you show what your 'stub_status on;' location output actually
looks like?

rr

Reinis R. Wrote:

> from 3000 to 5000, 10000 … 10k … 50k, and
> never reduce even in the late night.
>
> Just to be sure - are you not by any chance
> reading the all-time total values (3rd line)
> instead of the actual ones, which are on the 4th?
> E.g. can you show what your 'stub_status on;'
> location output actually looks like?

Thanks for the reminder. I am sure I'm recording the 1st and 4th lines
of the status output:

curl http://server1/status
Active connections: 40265
server accepts handled requests
16856987 16856987 28380346
Reading: 2583 Writing: 267 Waiting: 37415

and on the web server at the same time:
netstat -nap | grep EST | grep 80 | wc -l
3818

I recorded the numbers using the following script; it has been tested.

RAWdata=$(curl http://${Host}/status 2>/dev/null)
ToNagios=$(echo $RAWdata | awk '{ if ($3 > '$Active_Num_WARNING' || $12 > '$Read_Num_WARNING' || $14 > '$Write_Num_WARNING' || $16 > '$Wait_Num_WARNING' ) printf( "'$Host'" " Warning |Active_connections_are="$3 ";;;; Reading="$12 ";;;; Writing="$14 ";;;; Waiting="$16 ";;;;"); else printf( "'$Host'" " OK |Active_connections_are="$3 ";;;; Reading="$12 ";;;; Writing="$14 ";;;; Waiting="$16 ";;;;") }')

sample output:
[xxx@desktop ~]$ RAWdata=$(curl http://server1/status 2>/dev/null)
[xxx@desktop ~]$ echo $RAWdata
Active connections: 44476 server accepts handled requests 17371377
17371377 29208687 Reading: 2839 Writing: 312 Waiting: 41325
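When the four stub_status lines are collapsed into one like that, the field positions used by the script above line up as follows (a sketch using the sample values from this thread):

```shell
# One-line stub_status output, values copied from the post above
status='Active connections: 44476 server accepts handled requests 17371377 17371377 29208687 Reading: 2839 Writing: 312 Waiting: 41325'

# $3 = active, $12 = reading, $14 = writing, $16 = waiting
echo "$status" | awk '{print "active=" $3, "reading=" $12, "writing=" $14, "waiting=" $16}'
# → active=44476 reading=2839 writing=312 waiting=41325
```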

From the recorded diagram you can see that not only waiting, but also
reading and writing, are all increasing.


nginx mailing list
[email protected]
nginx Info Page


Hello!

On Fri, May 25, 2012 at 11:58:22AM -0400, tntzwz wrote:

> 2743

[...]

> --http-proxy-temp-path=/home/web/nginx/data/proxy
> --http-fastcgi-temp-path=/home/web/nginx/data/fastcgi
> --http-client-body-temp-path=/home/web/nginx/data/client
> --with-pcre=…/pcre-7.9
> --add-module=…/ngx_http_upstream_keepalive-d7643c291ef0
> --add-module=…/hmux/ --add-module=…/nginx-sticky-module-1.0/
> --with-google_perftools_module
> --add-module=…/nginx_upstream_check_module-660183a

Obvious suggestion is: try compiling nginx without any third party
modules and patches.

Maxim D.

Maxim D. Wrote:

> Obvious suggestion is: try compiling nginx without any third party
> modules and patches.

Thanks, I'll try this.
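For reference, a stripped-down build along the lines Maxim suggests might look like this (a sketch only: it keeps the stock modules and temp paths from the build above and drops the four third-party modules, their patches, pcre and google_perftools):

```shell
./configure --prefix=/home/web/nginx/ --user=nobody --group=nobody \
    --with-http_ssl_module --with-http_sub_module \
    --with-http_dav_module --with-http_flv_module \
    --with-http_gzip_static_module --with-http_stub_status_module \
    --http-proxy-temp-path=/home/web/nginx/data/proxy \
    --http-fastcgi-temp-path=/home/web/nginx/data/fastcgi \
    --http-client-body-temp-path=/home/web/nginx/data/client
```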


