Unanswered requests from local host, high load

Dear list,

I’m running a public mirror server which currently handles about
5 GBit/sec with 15,000 concurrent connections and 1400 requests/second.
We only serve static files and basically all served files are in RAM
(e.g. the current Mozilla Firefox update).

Under high load, nginx sometimes fails to answer simple requests in
time, even when they originate from localhost. According to SmokePing’s
“HTTP ping”, which just GETs/HEADs the static index.html file, the
response time varies dramatically and packet loss reaches up to 17%
when the problems occur.

The Apache running in the background did not have this problem for
queries from localhost. For external queries (measured by another
SmokePing instance), I see gaps (100% packet loss) for both Apache and
nginx; around those times nginx also shows packet loss of up to 20%
and higher response times.

If you want, I can provide the four SmokePing graph snippets.

nginx 0.8.53, 64 workers, no keepalive, 5000 worker connections, epoll,
sendfile, tcp_nodelay
Linux 2.6.26 (Debian), max 131072 open files
two quadcore Xeons, 60 GByte DDR3, Intel 10 GBit NIC
10 TByte StorageTek disk backend
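
The settings above could be sketched as an nginx.conf fragment like the
following (directive names from the nginx documentation; only the values
stated in this message are taken from the setup, everything else is an
assumed default):

```nginx
# Sketch of the configuration described above, not the actual file.
worker_processes  64;

events {
    use epoll;
    worker_connections  5000;
}

http {
    sendfile           on;
    tcp_nodelay        on;
    keepalive_timeout  0;   # "no keepalive"
}
```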

In case you need further information, I’m happy to provide it.

I just increased to 96 workers (as I had already intended; I
accidentally reverted that config today), let’s see what happens…
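
As a side note, a common rule of thumb (an assumption on my part, not
something measured on this box) is that for static-file serving with
sendfile, worker_processes close to the CPU core count is usually
enough; something like this would show the core count:

```shell
# Rule-of-thumb check: one nginx worker per online core is often
# sufficient for static files, since workers are event-driven.
cores=$(getconf _NPROCESSORS_ONLN)
echo "online cores (suggested worker_processes): $cores"
```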

Best regards,

I’m surprised you need so many workers.

Sent from my iPod

On Fri, Oct 29, 2010 at 02:27:45AM +1100, Splitice wrote:

> I’m surprised you need so many workers.

I’m not sure if we need that many workers, I just have them. My current
theory is that we hit some limit (max open files?), because I just
noticed that both Apache and nginx stopped accepting/serving new
connections for a considerable time (two minutes or so).
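
For what it’s worth, the numbers from the first mail would support that
theory: 64 workers times 5000 worker_connections can in principle hold
far more sockets than the stated 131072 open-file limit (a rough upper
bound, ignoring other file descriptors each worker holds):

```shell
# Hypothetical back-of-the-envelope check using the values from this
# thread: potential concurrent connections vs. the open-file limit.
workers=64
worker_connections=5000
max_open_files=131072

needed=$((workers * worker_connections))
echo "potential connections: $needed (open-file limit: $max_open_files)"
# → potential connections: 320000 (open-file limit: 131072)
```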

Best regards,
