After 1 minute, I get this error: "connect() to 127.0.0.1:8080 failed (99: Cannot assign requested address)"

Hi,

First of all, my environment:

  • About 1.6 GB RAM, which doesn’t seem to be a bottleneck, since I’m
    barely using it.
  • CPU fast enough (I guess)
  • Ubuntu 12.04 (32-bit; probably that’s irrelevant here)
  • My users make requests on port 80 (actually without specifying the
    port) to call a service I’m running on my server.
  • Nginx 1.4.3 receives the requests and forwards them to Tomcat
    7.0.33.
  • Tomcat 7.0.33 is running on port 8080.

My website/service has always run fine. I’m running stress tests to see
whether I can handle about 1000 queries per second, and I’m getting this
error message in Nginx’s log:

2014/02/12 09:59:42 [crit] 806#0: *595361 connect() to
127.0.0.1:8080 failed (99: Cannot assign requested address) while
connecting to upstream, client: 58.81.5.31, server: api.acme.com,
request: "GET /iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46 HTTP/1.1",
upstream: "http://127.0.0.1:8080/iplocate/locate?key=UZ6FD8747F76VZ&ip=61.157.194.46",
host: "services.acme.com"

My users are getting a “BAD GATEWAY” error status, which surfaces in
their Java clients as an exception. My interpretation is that Nginx
suddenly becomes unable to communicate with Tomcat, so it returns an
HTTP 502 “Bad Gateway” status code.

It starts out running fine, but after 1-2 minutes I start getting this
error. Obviously I’m running out of some kind of resource (ports?). If
I wait a few minutes to let the system “rest”, it works again for a
while (a minute, maybe) and then fails again.

If I run this command (which I suspect is relevant):

       sysctl net.ipv4.ip_local_port_range

I get this response, which I think is the Ubuntu default (I haven’t
messed with it):

  root@ip-10-41-156-142:~# sysctl net.ipv4.ip_local_port_range
  net.ipv4.ip_local_port_range = 32768    61000

I have read some postings about configuring ports in order to get rid of
this error message, but I don’t know if that is my problem.

Could somebody please help me?

Brian

============================== NGINX CONFIGURATION FOLLOWS ==============================

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    multi_accept on;
}

http {
    limit_req_status 429;
    map $arg_capacity $1X_key {
        ~*^1X$   $http_x_forwarded_for;
        default  "";
    }
    limit_req_zone $1X_key zone=1X:1m rate=180r/m;

    # Basic Settings

    client_max_body_size 0m;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    server_names_hash_bucket_size 64;
    server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging Settings

    log_format formato_especial '$remote_addr - $remote_user [$time_local] "$request" '
                                '$status $body_bytes_sent "$http_referer" '
                                '"$http_user_agent" "$http_x_real_ip" "$http_x_forwarded_for"';

    access_log off;
    error_log /var/log/nginx/error.log;

    # Virtual Host Configs

    include /etc/nginx/conf.d/*.conf;

    server {
        limit_req zone=1X burst=300;
        limit_req_log_level error;
        listen 80;
        server_name api.acme.com https-api.acme.com services.acme.com https-services.acme.com;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}

Hello!

On Wed, Feb 12, 2014 at 02:39:41PM -0500, Jack Andolini wrote:

> • Nginx 1.4.3 receives the requests and forwards them to Tomcat 7.0.33
> Obviously I’m running out of some kind of resource (ports?).
> root@ip-10-41-156-142:~# sysctl net.ipv4.ip_local_port_range
> net.ipv4.ip_local_port_range = 32768 61000
>
> I have read some postings about configuring ports in order to get rid of
> this error message, but I don’t know if that is my problem.

You’ve run out of local ports due to sockets in TIME-WAIT state.
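
Quick arithmetic, assuming every proxied request opens a fresh upstream
connection and the Linux default of 60 seconds in TIME-WAIT:

    61000 - 32768 + 1 = 28233 usable local ports
    28233 / 60 ≈ 470 ports freed per second once TIME-WAIT expiry starts

At ~1000 connections per second you consume ports faster than they are
freed, so the range drains in about a minute, which matches the failures
you see. You can confirm by counting TIME-WAIT sockets during a test:

    netstat -ant | grep -c TIME_WAIT
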
There are several possible solutions, including using a lower MSL,
a bigger local port range, keepalive connections to upstream servers,
or unix sockets to connect to backends.
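
For illustration, here is a minimal sketch of the keepalive variant;
the upstream name "tomcat" and the pool size of 32 are arbitrary
examples, not taken from your configuration:

    upstream tomcat {
        server 127.0.0.1:8080;
        keepalive 32;    # cache up to 32 idle connections per worker process
    }

    server {
        ...
        location / {
            proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
            proxy_set_header Connection "";  # don't forward the client's Connection header
            proxy_pass http://tomcat/;
        }
    }

With connections reused, nginx stops opening (and tearing down) a new
socket to Tomcat for every request, so far fewer sockets end up in
TIME-WAIT.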

The simplest solution would be to enable TIME-WAIT socket reuse,
with net.ipv4.tcp_tw_reuse (or net.ipv4.tcp_tw_recycle):

sysctl net.ipv4.tcp_tw_reuse=1
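
That takes effect immediately but does not survive a reboot. A sketch
of making it persistent (the widened port range below is an
illustrative value, adjust to taste):

    cat >> /etc/sysctl.conf <<'EOF'
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.ip_local_port_range = 10240 65000
    EOF
    sysctl -p    # reload /etc/sysctl.conf

Prefer tcp_tw_reuse over tcp_tw_recycle: the latter is known to break
clients behind NAT.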


Maxim D.
http://nginx.org/