This may be a bit difficult to explain, but I will try my best:
We have the following setup:
[ BOX 1: NGINX Frontend ] --reverse-proxy--> [ BOX 2: NGINX Backend --> PHP-FPM ]
Upstream keepalives are enabled on BOX 1 as follows:
upstream backend {
    server 1.2.3.4;
    keepalive 512;
}
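The matching proxy config on BOX 1 isn't quoted above; roughly, it looks like the following (the location path here is just illustrative). The keepalive directive only takes effect with HTTP/1.1 to the upstream and a cleared Connection header:

location / {
    proxy_pass http://backend;
    # upstream keepalive requires HTTP/1.1 to the backend
    proxy_http_version 1.1;
    # clear the Connection header so nginx doesn't send "Connection: close" upstream
    proxy_set_header Connection "";
}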
Keepalives are enabled on BOX 2 as follows:
keepalive_timeout 86400s;
keepalive_requests 10485760;
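Those sit in BOX 2's http block; for reference, the stock defaults (if I remember the docs correctly) are 75s and 100 requests:

http {
    keepalive_timeout  86400s;     # default: 75s
    keepalive_requests 10485760;   # default: 100
}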
Yes, really high values… BOX 2 never sees any external traffic; it's all coming just from the front end (BOX 1).
We have noticed that sometimes BOX 2 returns a Connection: close header and leaves the connection in the TIME_WAIT state, EVEN THOUGH the request came with a Connection: keep-alive header. This is correct behavior if BOX 2 wanted to close the connection… but why would it want to?
We have sniffed this info via netstat AND ngrep.
We are 100% sure BOX 2 sometimes sends back a Connection: close header and the connection is left in a TIME_WAIT state. When we run ngrep and watch netstat -na | grep TIME_WAIT | grep BOX1IP | etc., the count of TIME_WAIT sockets increases as soon as a Connection: close is sent.
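(The exact one-liner isn't important; it's something roughly along these lines, with BOX1IP standing in for the frontend's address:)

# count TIME_WAIT sockets from the frontend, refreshed every second
watch -n1 'netstat -na | grep TIME_WAIT | grep BOX1IP | wc -l'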
So, to summarize: in what situations would nginx send a Connection: close to a client that makes a request with Connection: keep-alive?
Worth noting: generally, the upstream keepalives do work, and there is a high number of requests per second flowing. These Connection: close events happen rarely, but frequently enough to warrant this lengthy post.
Thanks!