I’m using nginx as a reverse proxy to Apache. Apache only serves PHP,
because I had no time to set up FastCGI or php-fpm on this box.
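For context, a minimal sketch of that kind of setup (the server name and ports here are assumptions, not my actual config): nginx listens on port 80 and proxies requests through to Apache on a local port.

```nginx
# nginx in front, Apache behind on 127.0.0.1:8080 (assumed port).
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Pass the original host and client IP through to Apache.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```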
Anyway, today there was a sudden increase in nginx writing connections
and in the number of Apache processes. It already happened to me in the
past, and the cause was the datacenter resolvers (at least, switching to
Google’s resolvers fixed it immediately). Today, changing the resolvers
does not do anything.
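By “switching resolvers” I mean pointing the box at Google’s public DNS (this is the usual way; whether the original fix was at the OS level or via nginx’s `resolver` directive, I’m showing the OS-level version here):

```
# /etc/resolv.conf -- use Google's public resolvers instead of the
# datacenter's:
nameserver 8.8.8.8
nameserver 8.8.4.4
```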
It is strange, because this time changing the resolvers does not help.
The writing connections are still elevated, and so is the number of
Apache processes.
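One way to keep an eye on the writing counter is nginx’s stub_status module (assuming it is compiled in and enabled with `stub_status on;` in a location block). A small parser sketch for its output; the numbers in the sample are made up for illustration:

```python
import re

# Example output of nginx's stub_status page (the counter values
# below are invented for illustration).
SAMPLE = """\
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(text):
    """Extract the Reading/Writing/Waiting counters from stub_status output."""
    m = re.search(r"Reading: (\d+) Writing: (\d+) Waiting: (\d+)", text)
    if not m:
        raise ValueError("unexpected stub_status format")
    return {"reading": int(m.group(1)),
            "writing": int(m.group(2)),
            "waiting": int(m.group(3))}

print(parse_stub_status(SAMPLE))
```

Polling this periodically makes it easy to see whether the writing count is really climbing or just spiking.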
Traffic is as usual. I will try to compile a newer stable build of nginx
today, and maybe try to serve the website completely from nginx.
Maybe the keepalives are causing the issue. If your keepalive timeout
is too high, connections that could be recycled for other users stay
occupied even though they are idle. Try reducing your keepalive timeout
to 1 second.
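In nginx that is a one-line change (the directive takes the idle timeout in seconds):

```nginx
# Free idle client connections after 1 second instead of holding them.
keepalive_timeout 1;
```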
I updated nginx to the latest stable and the problem is still here. The
resolvers work properly, but when I try to connect to the website I need
to wait 3 seconds to get connected… any further surfing of the site is
fast (I use keepalive). And I cannot find out why this happens.
Here is what I found out: the cause is keepalive on the vBulletin server
(nginx as proxy for Apache) that is used for authentication from the
main web server I had trouble with. I debugged the application and found
that the 3-second wait at page load is always equal to the keepalive
timeout on the vB server. When I set the keepalive timeout to 40, the
website waited 40 seconds to load, etc… so I turned off keepalive on the
vB server and everything loads instantly.
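For reference, turning keepalive off on the nginx side is just a zero timeout (if the culprit were Apache’s own keepalive behind the proxy, the equivalent would be `KeepAlive Off` in httpd.conf; I haven’t confirmed which layer mattered here):

```nginx
# A timeout of 0 disables keepalive entirely; every request gets a
# fresh connection.
keepalive_timeout 0;
```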
I do not understand this behavior, though. :-? I will investigate
further and switch completely to nginx + FastCGI PHP or php-fpm.
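The php-fpm route would drop Apache entirely; a minimal sketch of what that looks like in nginx (the socket path is an assumption, matching a typical layout):

```nginx
# Hand .php requests straight to php-fpm over a unix socket,
# no Apache in the chain.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```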
Posted at Nginx Forum: