TCP Socket TIME_WAIT problem

Hi folks,
I’m stress-testing NGINX to understand its boundaries and limits, because I’d
like to build a web service for a high-performance application on our
LAN. I’m currently running NGINX 1.2.1 on Linux and serving a single
static page (“Hello World”) to measure performance. The page is fetched
on localhost from a simple bash script:
#!/bin/bash
# Fetch the test page 50001 times, one request per iteration.
for i in {0..50000}; do
    echo $i
    wget http://127.0.0.1/test.html -q -O - >> /dev/null
done

In other terminals I’m monitoring the sockets and their states
( netstat -an | grep WAIT | wc -l ).
When I run the bash script everything is fine until I reach 28233
connections in TIME_WAIT; then the test freezes for a while until some
of those TIME_WAIT connections are closed. Performance is good up to
that limit, then everything stalls until the TIME_WAIT sockets are
disposed of.
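
For anyone reproducing this, one way to watch the counter live is the
following (just a sketch; it assumes watch is installed, and ss is only
used as a faster alternative to netstat):

#!/bin/bash
# Refresh the TIME_WAIT socket count once per second.
if command -v ss > /dev/null; then
    watch -n1 'ss -tn state time-wait | tail -n +2 | wc -l'
else
    watch -n1 'netstat -an | grep -c TIME_WAIT'
fi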
After a bit of research on Google I found this information:
http://developerweb.net/viewtopic.php?id=2941
http://developerweb.net/viewtopic.php?id=2982

I’m running the tests on a pretty fast machine: an Intel Core i7 with 8 GB
of RAM and Gentoo Linux with kernel 3.12. Is there a way to get rid of
this limit? I’ll post my nginx.conf below, but please let me know if you
need more info or other config files.

user nginx nginx;
worker_processes 1;
error_log /var/log/nginx/error_log info;

events {
    worker_connections 1024;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main
        '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $bytes_sent '
        '"$http_referer" "$http_user_agent" '
        '"$gzip_ratio"';

    client_header_timeout 10m;
    client_body_timeout 10m;
    send_timeout 10m;

    connection_pool_size 256;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 2k;
    request_pool_size 4k;

    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;

    output_buffers 1 32k;
    postpone_output 1460;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 75 20;
    ignore_invalid_headers on;
    index index.php index.html;

    server {
        listen 127.0.0.1;
        server_name localhost;
        root /var/www/localhost/htdocs;

        location ~ .*\.php$ {
            include /etc/nginx/fastcgi.conf;
            fastcgi_pass 127.0.0.1:1234;
            fastcgi_index index.php;
        }
    }
}

Thanks in advance for your reply
Ben

Hello!

On Fri, Jun 22, 2012 at 05:51:46AM -0400, andreabenini wrote:

> wget http://127.0.0.1/test.html -q -O - >> /dev/null

> 2.7 - Please explain the TIME_WAIT state (UNIX Socket FAQ)
> 4.6 - What exactly does SO_LINGER do? (UNIX Socket FAQ)

> I’m running the tests on a pretty fast machine: an Intel Core i7 with
> 8 GB of RAM and Gentoo Linux with kernel 3.12. Is there a way to get
> rid of this limit? I’ll post my nginx.conf below, but please let me
> know if you need more info or other config files.

The problem you’ve hit is local port exhaustion on the client side due
to sockets in the TIME_WAIT state. Possible solutions are:

  1. Add more client local ports. The trivial way to do so is to add
     more clients (or, rather, client IPs). You may also tune a
     single client to use a wider local port range
     (/proc/sys/net/ipv4/ip_local_port_range on Linux), though it’s
     limited to 64k ports anyway (see the sketch after this list).

  2. Configure your system to reuse/recycle TIME_WAIT sockets
     (tcp_tw_reuse and tcp_tw_recycle in the same /proc/ directory on
     Linux).

  3. Reduce the MSL used by your system (no idea if it’s tunable on
     Linux); this will cause TIME_WAIT sockets to expire faster.
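
As a rough shell sketch of options 1 and 2 (these are standard Linux
sysctls, nothing nginx-specific, and the values are only examples):

# How many sockets are currently stuck in TIME_WAIT?
netstat -an | grep -c TIME_WAIT

# Option 1: widen the client's ephemeral (local) port range.
cat /proc/sys/net/ipv4/ip_local_port_range            # show the current range
sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # still capped at ~64k ports

# Option 2: let the kernel reuse TIME_WAIT sockets for new outgoing
# connections.  tcp_tw_recycle is more aggressive and can break clients
# behind NAT, so tcp_tw_reuse is usually the safer choice.
sysctl -w net.ipv4.tcp_tw_reuse=1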

Maxim D.