Forum: NGINX Too many open files...

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Ilan B. (Guest)
on 2009-01-13 18:28
(Received via mailing list)
This morning, our server (www.spellingcity.com) went down and I can't
figure out why.  The Nginx log file has this error in it (all over the
place, and it is growing):


2009/01/13 10:07:24 [alert] 26159#0: accept() failed (24: Too many open
files) while accepting new connection on 74.200.197.210:80

Which is of course followed by an endless stream of:

2009/01/13 10:14:56 [error] 26159#0: *13007 upstream timed out (110:
Connection timed out) while connecting to upstream, client: 131.109.51.3,
server: www.spellingcity.com, request: "GET / HTTP/1.0", upstream:
"fastcgi://127.0.0.1:9000", host: "www.spellingcity.com"


Help???


Thanks
Igor S. (Guest)
on 2009-01-13 18:51
(Received via mailing list)
On Tue, Jan 13, 2009 at 11:15:52AM -0500, Ilan B. wrote:

> This morning, our server (www.spellingcity.com) went down and I can't figure
> out why.  The Nginx log file has this error in it (all over the place and is
> growing):
>
>
> 2009/01/13 10:07:24 [alert] 26159#0: accept() failed (24: Too many open
> files) while accepting new connection on 74.200.197.210:80

This is an OS limit that should be increased:
http://www.google.com/search?q=linux+number+of+open+files
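A quick way to see where that limit currently stands on a Linux box (a sketch, not from the original post; the paths shown are standard Linux locations, but verify them on your distribution):

```shell
# Per-process open-file limits for the current shell (soft and hard).
ulimit -Sn
ulimit -Hn

# System-wide ceiling on open file descriptors (Linux).
cat /proc/sys/fs/file-max

# Count descriptors currently held by a process, e.g. an nginx
# worker -- substitute a real PID for the placeholder below.
# ls /proc/<nginx-worker-pid>/fd | wc -l
```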

> Which of course follows up with endless:
>
> 2009/01/13 10:14:56 [error] 26159#0: *13007 upstream timed out (110:
> Connection timed out) while connecting to upstream, client: 131.109.51.3,
> server: www.spellingcity.com, request: "GET / HTTP/1.0", upstream:
> "fastcgi://127.0.0.1:9000", host: "www.spellingcity.com"

This error may be associated with "Too many open files", since
connecting to localhost should normally complete very quickly.
Ilan B. (Guest)
on 2009-01-13 19:02
(Received via mailing list)
Thanks for the fast response.  Our site is back up :-).  Our tech
support (dedicated server support) did something to fix this issue;
I will find out later what.  I'll keep an eye on the open files, as
we currently have it set pretty high.
Thomas (Guest)
on 2009-01-14 14:45
(Received via mailing list)
On Tue, Jan 13, 2009 at 5:50 PM, Ilan B. <removed_email_address@domain.invalid>
wrote:
> Thanks for the fast response.  Our site is back up :-).  Our tech support
> (dedicated server support) did something to fix this issue, I will find out
> later what.  I'll keep an eye on the open files as we currently have it set
> pretty high.
>

I remember Zed S. talking about such an issue back in the day when
people were running Rails through FastCGI. It had something to do with
keep-alive connections: the connections would never actually close.
István Szukács (Guest)
on 2009-01-14 16:21
(Received via mailing list)
First you have to isolate the layer where the problem occurs. During
the last week I ran a small test with nginx and was able to reach
50K req/s on a single host running CentOS.

Linux level:

/etc/security/limits.conf

* hard nofile 10000
* soft nofile 10000

It might solve your problem.
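One caveat worth adding (my note, not from the original post): limits.conf is applied by PAM at login, so a daemon started from an init script may never pick up the new values. You can check what a running process was actually granted via /proc:

```shell
# Inspect the open-files limit actually in force for a process.
# The current shell ($$) is used as a stand-in; substitute the
# nginx master PID (often recorded in /var/run/nginx.pid, but the
# path depends on your build) to check nginx itself.
pid=$$
grep 'Max open files' /proc/$pid/limits
```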

Regards,
Istvan
Dave C. (Guest)
on 2009-01-15 23:49
(Received via mailing list)
worker_rlimit_nofile 8192;
events {
     worker_connections  2048;
     use epoll;
}

Each FastCGI connection will use two file descriptors: one for the
client, one for the proxy.
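A back-of-the-envelope check of the numbers above (two descriptors per proxied connection is the worst case; real usage also includes log files and listen sockets, so leave headroom):

```shell
# Worst-case descriptor demand per worker for the config above.
worker_connections=2048
fds_per_connection=2          # one client side, one upstream side
needed=$((worker_connections * fds_per_connection))
echo "worst-case fds per worker: $needed"
# 4096, comfortably under worker_rlimit_nofile 8192
```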

http://wiki.codemongers.com/NginxHttpEventsModule#...
http://wiki.codemongers.com/NginxHttpMainModule#wo...

As others have advised, make sure your ulimit settings at the OS level
allow that many file descriptors.

man ulimit

Cheers

Dave
This topic is locked and can not be replied to.