accept() failed (24: Too many open files)

Hello,

I’m using nginx to serve static files and reverse proxy to an Apache instance
on the same server for PHP.

I’m getting a large number of 500 Internal Server Error responses, and lots of
2009/09/27 00:49:21 [alert] 22383#0: accept() failed (24: Too many open files)
in my nginx error log when visitor traffic peaks.

nginx.conf includes
user apache apache;
worker_processes 12;
worker_connections 1024;
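For reference, worker_connections lives inside the events block; a minimal sketch of how the quoted directives fit together in nginx.conf (structure only, values taken from the excerpt above):

```
user apache apache;
worker_processes 12;

events {
    worker_connections 1024;   # per-worker connection limit
}
```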
Accessing stub_status currently shows
Active connections: 625
server accepts handled requests
233130990 233130990 400847438
Reading: 8 Writing: 10 Waiting: 607

right now, but Active connections was around 1200 when I was getting the
errors.

[root@firewall2 ~]# cat /proc/sys/fs/file-max
372684
[root@firewall2 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 77824
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 200000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 77824
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

I see no reason I should be bumping into the file open limits.

Anyone know why I’m getting the errors?
Thanks.

On Sat, Sep 26, 2009 at 8:59 AM, Jason K. [email protected]
wrote:

pending signals (-i) 77824
file locks (-x) unlimited

I see no reason I should be bumping into the file open limits.

Anyone know why I’m getting the errors?
Thanks.

I believe that ulimit applies to the currently logged-in user’s active
session; you will want to modify the ulimit settings for the user that
runs your process.

On Sat, Sep 26, 2009 at 5:59 PM, Jason K. [email protected]
wrote:

worker_processes 12;

Do you really have 12 cores? It doesn’t make much sense to have more
workers than cores.

How is this related to his question? How does your answer help him
figure out the issue?

Jason, check your ulimits for the user running the process; also make
sure to update /etc/security/limits.conf so the new settings take effect at boot.
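The /etc/security/limits change suggested above might look like the following sketch (the values are illustrative, not from this thread):

```
# /etc/security/limits.conf — raise the open-files limit for the apache user
apache  soft  nofile  200000
apache  hard  nofile  200000
```

Separately, nginx has its own worker_rlimit_nofile directive, which raises the open-file limit for the worker processes themselves regardless of any login-session ulimit, e.g. `worker_rlimit_nofile 200000;` at the top level of nginx.conf.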

Regards,

Payam Tarverdyan Chychi
Network Engineer

Sent from my iPhone

On 2009-09-26, at 11:02 AM, Dennis B. [email protected] wrote:

Actually, I was running it with 4 workers and a higher number of
worker_connections; hoping the errors would go away, I changed it to 12
a few days ago and am still getting the errors.
The server is quad-core.

I am running CentOS 5.3; from what I know, the default open-files limit
for standard users is 200000.

Anyway, to make sure, I edited /etc/passwd to
apache:x:48:48:Apache:/var/www:/bin/bash

so I could log in as apache (I set user apache apache; in nginx.conf),

and then as root typed
[root@firewall2 ~]# su apache

and then as apache typed ulimit -a, which showed
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 77824
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 200000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 77824
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Is there any reason I should still be bumping into file limits?

Thanks
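One thing worth checking (a suggestion, not something raised in the thread): the limits a login shell reports are not necessarily the limits the running nginx workers inherited, since the master process is started separately (e.g. by root at boot). A sketch for inspecting a worker’s actual limit, assuming pgrep is available and the workers show up as “nginx: worker”:

```shell
#!/bin/sh
# Find one nginx worker PID (the process title is an assumption;
# adjust the pattern for your setup).
pid=$(pgrep -f 'nginx: worker' | head -n 1)

# Show the soft/hard open-files limits the process actually runs with.
# Falls back to the current shell (/proc/self) if no worker is found,
# so the command still demonstrates the /proc/<pid>/limits format.
grep 'Max open files' "/proc/${pid:-self}/limits"
```

If the number printed here is lower than what `ulimit -a` shows for the apache user, the workers never picked up the raised limit.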

Odd, you shouldn’t, unless there is a bug with CentOS. I recently came
across an issue where writing/creating files on disk resulted in a
halt-like condition on CentOS, as if the system was hitting the open-file
limits. I tested the same hardware and exact setup with Debian and the
issue went away; I could replicate the issue on demand. Also, I know
that CentOS has a bug in the way files are written to disk; I’m no
expert, but if you google “CentOS slow HD performance” you’ll see it.

One thing you can do is install and run ‘dstat’; it will show you all
disk I/O and network/CPU activity.

Regards,

Payam Tarverdyan Chychi
Network Engineer

Sent from my iPhone