Even with the following nginx builds for high-traffic production environments, http://nginx-win.ecsds.eu/, it still seems like worker_rlimit_nofile is too small by default on nginx startup, so unless you know to add the directive to your nginx config with some insanely high value, your site will seem slow as hell.
I read the following on Stack Overflow:
events {
    worker_connections 19000; # It's the key to high performance - have a lot of connections available
}
worker_rlimit_nofile 20000; # Each connection needs a filehandle (or 2 if you are proxying)
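Following that rule of thumb (one file handle per connection, two when proxying), a matching pair of directives might look like this - a sketch, where the headroom figure is my own assumption, not from the quoted answer:

```nginx
events {
    worker_connections 16384;
}
# 2 handles per connection when proxying, plus headroom for logs and certificates
worker_rlimit_nofile 40000;
```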
But in my config I set the following, just to test:
events {
    worker_connections 16384;
    multi_accept on;
}
worker_rlimit_nofile 9990000000;
And everything still works fine. Can anyone explain, so I can understand, why this value is so small in the first place?
Well, without a value everything is very, very slow. With a value it's nice and fast.
Interesting to know: the Windows design and other portions scale automatically between 4 APIs to deal with high performance while offloading that to multiple workers at the same time. This design is limitless, but some baseline values have been set fixed because you need to start somewhere, before tuning runs and after all workers have settled down.
I don't know how you would try to replicate this issue. I have thousands upon thousands of files being accessed simultaneously; without me setting that value insanely high, pages and access to things take 10 seconds or more and even timeouts were occurring, but as soon as I set that value it all stops and everything seems to run as fast as it's supposed to.
On Unix, worker_rlimit_nofile does exactly what's documented: it calls setrlimit(RLIMIT_NOFILE) within the worker process. This allows changing the OS-imposed limit without restarting nginx. And there is no default in nginx itself - the default is set by the OS and its configuration.
Note well that the words "without restarting" are actually the reason why this directive exists at all. If a restart isn't a big deal, then the OS limit can be changed by native means ("ulimit -n" and friends).
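For reference, those native means can be checked from a shell before touching nginx at all (a sketch; ulimit is a shell builtin and the values vary per system):

```shell
# Show the soft and hard per-process open-file limits that nginx workers would inherit
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"
```

If the soft limit printed here is lower than worker_connections, that is the ceiling worker_rlimit_nofile is meant to raise.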
In official nginx on Windows, worker_rlimit_nofile does nothing.
Not sure if there is an equivalent limit on Windows at all.
Now I am clueless, because I dropped keepalive requests and I also dropped any send_timeout values.
And this is what my bandwidth output looks like: it's very jumpy when it should not be, and my page loads are very slow even on static files like html, mp4, flv etc., and considering it's nginx that delivers them I am very sure nginx is the problem.
Something is very wrong with nginx; I reckon I am completely out of available connections and it is waiting for a connection to open up before it can be used.
Unless anyone knows what I might need to add to my config, given that people are downloading/streaming videos that are up to 500mb in size.
I already use limit_rate, but with that my bandwidth output should not be jumpy like in the picture; it would be a straight smooth line like it used to be. With more traffic, I reckon my connection limit is reached?
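For large downloads and streams, limit_rate is commonly paired with limit_rate_after so each connection gets a fast start and is then throttled evenly - a sketch, where the location and both values are assumptions, not taken from the poster's config:

```nginx
location /videos/ {
    limit_rate_after 2m;   # send the first 2 MB at full speed so playback starts quickly
    limit_rate       500k; # then cap each connection at 500 KB/s
}
```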
When I said "my bandwidth output looks like it's very jumpy": on a 1-gigabit-per-second connection my output jumps up and down - 10% (100mb) used, then it will jump to something like 40% (400mb) - and it changes constantly. Before, when I had less traffic, it used to be a very steady and stable 400-500mb output and hardly ever changed so dramatically.
In the following screenshot you will see my I/O usage from nginx is extremely high.
And I would like to add: the only reason the nginx processes in that picture are using so much memory is because I set "worker_connections 1900000;" to a high value. I don't know if it should use so much memory or if it is just wasting my system resources.
This is a disk IO issue, not running out of connections. Setting 1900000 is pointless; 16k is more than enough. No more than 2 workers per CPU - I see 12 workers, so do you have enough CPUs to cover that?
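The worker-count guidance above can be left to nginx itself (a sketch; "auto" sizes one worker per CPU core, which is the usual starting point before any manual tuning):

```nginx
worker_processes auto;  # one worker per core; only raise this after measuring
```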
Could it be possible my server slows down because all connections are in use?
No, it's a recycling and auto-tuning issue as far as I can see. Have you determined at which value you noticed the difference, or is this value simply a big number?
Looking at the disk activity, access to disk is using all your resources, not nginx.
In the screenshot you can see nginx itself is waiting for disk IO to complete; all processes are doing just about nothing other than waiting for the hard disk. The main waiting issue looks like it is writing to disk, which isn't going fast enough.
It all depends on what you are writing: too small a block size, many seeks, onboard disk cache not working (write-back). Run some disk benchmarks to see what your storage is capable of and compare that to how much data you're attempting to write. At the moment your disks are not keeping up with the amount of write requests.
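A quick way to get that baseline is a coarse sequential-write test with dd (a sketch; the path, block size, and count are assumptions - conv=fdatasync forces the data to disk so the write-back cache doesn't inflate the number):

```shell
# Write 256 MB in 1 MB blocks; dd reports the sustained write rate at the end
dd if=/dev/zero of=/tmp/nginx-io-test bs=1M count=256 conv=fdatasync
rm /tmp/nginx-io-test
```

Compare the reported MB/s against the traffic nginx is pushing; if they are in the same ballpark, the disk - not nginx - is the bottleneck.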