This morning, our server (www.spellingcity.com) went down and I can't figure out why. The Nginx log file has this error in it (it's all over the place, and the log keeps growing):
2009/01/13 10:07:24 [alert] 26159#0: accept() failed (24: Too many open files) while accepting new connection on 74.200.197.210:80
That is, of course, followed by an endless stream of:
2009/01/13 10:14:56 [error] 26159#0: *13007 upstream timed out (110: Connection timed out) while connecting to upstream, client: 131.109.51.3, server: www.spellingcity.com, request: "GET / HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.spellingcity.com"
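A quick way to see how close a worker is to its descriptor limit is to compare its current fd count against the limit. The pid 26159 below is taken from the log lines above; substitute a live worker pid on your own box:

    # Per-process open file limit in the shell nginx was started from:
    ulimit -n

    # Number of descriptors the worker currently holds
    # (run as root or as the nginx user):
    ls /proc/26159/fd | wc -l

    # Same idea with lsof, which also shows what the descriptors are:
    lsof -p 26159 | wc -l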
Thanks for the fast response. Our site is back up :-). Our tech support (dedicated server support) did something to fix the issue; I'll find out later what. I'll keep an eye on the open files, as we currently have the limit set pretty high.
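For anyone hitting the same thing, "set pretty high" usually comes down to two knobs; the values below are illustrative, not recommendations:

    # /etc/security/limits.conf -- raise the system limit for the nginx user:
    nginx  soft  nofile  65536
    nginx  hard  nofile  65536

    # nginx.conf -- let workers raise their own rlimit, and keep
    # worker_connections below it (each fastcgi request can use two
    # descriptors: one to the client, one to the backend):
    worker_rlimit_nofile 65536;
    events {
        worker_connections 8192;
    }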
I remember Zed S. talking about such an issue back in the days when people were running Rails through FastCGI. It had something to do with keep-alive connections: the connections would never actually close themselves.
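If that's what's happening here, it should show up in the socket states on the FastCGI port (127.0.0.1:9000 from the log above); the netstat invocation below is the common Linux form:

    # Count connections to the fastcgi backend by TCP state; a pile of
    # ESTABLISHED or CLOSE_WAIT entries that never drains suggests
    # connections that are never being closed.
    netstat -tan | grep ':9000' | awk '{print $6}' | sort | uniq -c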
First you have to isolate the layer where the problem is. During the last week I ran a small test with nginx and was able to reach 50K req/s on a single host using CentOS and nginx.
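One simple way to do that isolation is to benchmark a static file first, which takes the FastCGI backend out of the picture entirely (ab is ApacheBench; the static path and the request/concurrency numbers here are hypothetical examples):

    # Static file: this exercises only the nginx layer.
    ab -n 10000 -c 100 http://www.spellingcity.com/static/test.html

    # Dynamic page: goes through fastcgi://127.0.0.1:9000. A large gap
    # between the two results points at the backend, not at nginx.
    ab -n 1000 -c 50 http://www.spellingcity.com/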