No, it’s the number of file descriptors allowed for a process; I cannot imagine
4096 workers (and the swap size!)
Correct. That is what I meant. I am trying to set the environment so
that there are plenty of file descriptors available for nginx’s use.
1024 seems rather low for the heavy traffic we will be handling.
I can successfully use ulimit -n 65536 to set the limit higher, and
then launch nginx manually from the shell and nginx is happy and doesn’t
complain about the 1024 limit. However when my init.d script launches
nginx it still gives that warning about the limit of 1024.
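(One way to confirm what limit the init-launched master actually ended up
with, assuming a Linux box with /proc and the default pid file path, which
is an assumption on my part:)

    # Check the open-file limit of the running nginx master (Linux /proc only).
    # /var/run/nginx.pid is an assumed location; adjust to match your build.
    grep 'Max open files' /proc/$(cat /var/run/nginx.pid)/limits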
http://wiki.codemongers.com/NginxMainModule#worker_rlimit_nofile
Thanks. I saw that, but it does not solve the problem. I tried setting
it to 35000 and it still gives the same warning.
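(For reference, that directive belongs in the top-level (main) context of
nginx.conf; a minimal sketch using the 35000 figure mentioned above, with an
illustrative worker_connections value that is not from this thread:)

    # nginx.conf -- main context (sketch only)
    # worker_rlimit_nofile asks nginx to raise RLIMIT_NOFILE for its workers;
    # worker_connections should stay comfortably below that number.
    worker_rlimit_nofile  35000;

    events {
        worker_connections  8192;   # illustrative value
    }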
Just to be clear, I am not trying to get rid of the warning by LOWERING
what nginx tries to use. Rather, I am trying to get rid of the warning
by RAISING the available file descriptors nginx can use from the
environment.
As I mentioned, it works just fine if I do the ulimit -n 65536 and
manually launch it. In that case it says:
[notice] 1984#0: getrlimit(RLIMIT_NOFILE): 65536:65536
which tells me it is working like I want.
However, when nginx gets launched from my init.d script it gives the 1024
warning, even though I have seemingly taken all the correct steps to set
the hard and soft file descriptor limits system-wide to 65536. I would be
very grateful if someone could please shed light on what I need to change
to make sure nginx can have access to all 65536 file descriptors or at
least a LOT more than 1024. Thank you!
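(For what it's worth, one common fix is to raise the limit inside the
init.d script itself, right before the daemon is started, since limits set
in an interactive shell or via /etc/security/limits.conf generally do not
apply to processes started at boot. A minimal sketch; the nginx binary path
is an assumption:)

    #!/bin/sh
    # /etc/init.d/nginx -- start fragment only (sketch)
    NGINX=/usr/local/sbin/nginx    # assumed install path

    start() {
        # Raise the open-file limit for this shell; the nginx master and
        # its workers inherit it instead of the boot-time default of 1024.
        ulimit -n 65536
        $NGINX
    }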