Nginx high active connections

We have a couple of servers; nginx takes the web requests.

Active connections in the range of 400-450 are okay, but sometimes the
count jumps to around 4,000, which is abnormal.

What does this mean? Is it that nginx is not able to take the load?

Or is it that the DB Server behind nginx is taking time? Or is it that
the Ubuntu server is not network optimized? Or is it that the RAM is
low?
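
(For context, the active connection count presumably comes from nginx's
stub_status page, exposed with something like the following - the
location path here is just an example:)

    location /nginx_status {
        stub_status;           # "stub_status on;" on older nginx versions
        allow 127.0.0.1;       # restrict to local monitoring
        deny all;
    }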

Tahseen


> Active connections in the range of 400-450 are okay, but sometimes
> the count jumps to around 4,000, which is abnormal.
>
> What does this mean? Is it that nginx is not able to take the load?

It is rarely the webserver itself that can't handle the
load/connections (even 10k connections is no big deal), but it is hard
to come to any conclusions and/or give any solutions knowing only two
numbers - 400 / 4,000. One needs at least some application specifics
(is there a dynamic language involved (php/python/…) or are only
static files being served / are there any db backends involved / can
the underlying filesystem handle the load (no I/O wait)? etc etc).
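
For example, a quick way to check for I/O wait while a spike is
happening (iostat comes with the sysstat package):

    vmstat 1 5        # the "wa" column shows CPU time spent waiting on I/O
    iostat -x 1 5     # per-device utilisation and await times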

Just out of experience, two cases come to mind - either it's a
slowdown of the application (a single request starts to take up too
much server time and then the rest just pile up), or it's some sort of
DDoS (but I would consider that the less likely case).
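
If it's the application slowing down, one way to confirm it is to log
request and upstream timings in nginx (the log format name and file
path below are just examples):

    log_format timing '$remote_addr "$request" $status '
                      'req=$request_time upstream=$upstream_response_time';
    access_log /var/log/nginx/access_timing.log timing;

If $upstream_response_time grows while the connection count climbs,
the backend is the bottleneck rather than nginx.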

The first thing to do is to check the logs:

  • check the nginx error log for notifications (file descriptors
    running out / workers being too few / upstream servers not
    responding) - see the sketch after this list

  • if you use php (with fpm), check its error_log too - lines like
    '[pool www] seems busy (you may need to increase pm.start_servers,
    or pm.min/max_spare_servers)' might indicate that there are sudden
    changes in request patterns or execution speed and all worker
    children are busy. It's also useful to set request_slowlog_timeout
    so php-fpm logs any request taking too long to execute.

  • if you use any DBs, check those logs too (e.g. the MySQL .err
    log, or better yet enable the slow query log ( log_slow_queries /
    long_query_time = 2 ))
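
A rough sketch of the checks above - the log paths, pool name and
thresholds are assumptions, adjust them to your distro/setup:

    # nginx error log - fd exhaustion, worker/upstream warnings
    grep -Ei 'too many open files|worker_connections are not enough|upstream timed out' \
        /var/log/nginx/error.log

    # php-fpm pool config (e.g. pool.d/www.conf) - enable the slow request log
    request_slowlog_timeout = 5s
    slowlog = /var/log/php-fpm/www-slow.log

    # MySQL (my.cnf, [mysqld] section) - slow query log
    slow_query_log      = 1        # "log_slow_queries" on older versions
    slow_query_log_file = /var/log/mysql/mysql-slow.log
    long_query_time     = 2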

There is no magical quick fix or answer to your question :)

rr

So the situation is this: nginx takes the incoming request and passes
it on to Tomcat, which is running a servlet. The servlet responds back
to nginx and at the same time logs various request information via
JMS.
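
(The proxying is something like the following - the port and timeout
values here are only illustrative:)

    upstream tomcat {
        server 127.0.0.1:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://tomcat;
            # while Tomcat is slow, connections sit open in nginx
            proxy_read_timeout 60s;
        }
    }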

Now, nginx is definitely capable of handling a high request volume.

I increased the Tomcat thread pool size from 500 to 2,000 and made the
heap size 4 GB.
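
(For reference, those changes typically live in conf/server.xml and
bin/setenv.sh; this is just a sketch and the exact connector
attributes depend on the setup:)

    <!-- conf/server.xml -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="2000"
               connectionTimeout="20000" />

    # bin/setenv.sh
    export CATALINA_OPTS="-Xms4g -Xmx4g"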

JMSMaxConnections was set to 10 and JMSMaxActiveConnections was set
to 5; I made them 100 and 50 respectively. After that, the nginx
active connection count dropped drastically.

But my question is: have I done justice to the JMS connection values?
Aren't they supposed to be higher, like 500 or so?
