Trying to Understand Upstream Keepalive

I’m trying to better wrap my head around the keepalive functionality in
the upstream module: when enabling keepalive, I’m seeing little to no
performance benefit using the FOSS version of nginx.

My upstream block is:

upstream upstream_test_1 {
    server 1.1.1.1 max_fails=0;
    keepalive 50;
}

With a proxy block of:

proxy_set_header X-Forwarded-For $IP;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_pass http://upstream_test_1;

  1. How can I tell whether there are any connections currently in the
    keepalive pool for the upstream block? My origin server has keepalive
    enabled and I see that there are some connections in a keepalive state,
    however not the 50 defined, and all seem to close much more quickly than
    the keepalive timeout for the backend server. (I am using the Apache
    server status module to view this, which is likely part of the problem.)

  2. Are upstream blocks shared across workers? So in this situation, would
    all 4 workers I have share the same upstream keepalive pool, or would
    each worker have its own block of 50?

  3. How is the length of the keepalive determined? Do the origin server’s
    keepalive settings factor in at all?

  4. If no traffic comes across this upstream for an extended period of
    time, will the connections be closed automatically or will they stay
    open indefinitely?

  5. Are the connections in keepalive shared across visitors to the proxy?
    For example, if I have three visitors to the proxy one after the other,
    would the expectation be that they use the same connection via
    keepalive, or would a new connection be opened for each of them?

  6. Is there any common level of performance benefit I should be seeing
    from enabling keepalive compared to just performing a proxy_pass
    directly to the origin server with no upstream block?

Thanks for any insight!


Hello!

On Thu, May 08, 2014 at 03:12:44AM -0400, abstein2 wrote:

  1. How can I tell whether there are any connections currently in the
    keepalive pool for the upstream block? My origin server has keepalive
    enabled and I see that there are some connections in a keepalive state,
    however not the 50 defined, and all seem to close much more quickly than
    the keepalive timeout for the backend server. (I am using the Apache
    server status module to view this, which is likely part of the problem.)
As long as load is even enough, don’t expect to see many keepalive
connections on the backend - new connections will only be opened if
there are no idle connections in the cache of a worker process.

  2. Are upstream blocks shared across workers? So in this situation, would
    all 4 workers I have share the same upstream keepalive pool, or would
    each worker have its own block of 50?

It’s per worker, see
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive.
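
For illustration, a minimal sketch of the per-worker arithmetic, using
the directive values from the configuration earlier in this thread (the
totals are upper bounds, not guarantees):

worker_processes 4;

upstream upstream_test_1 {
    server 1.1.1.1 max_fails=0;
    # "keepalive 50" is a per-worker cache: each of the 4 worker
    # processes may hold up to 50 idle connections, so up to
    # 4 * 50 = 200 idle connections to this upstream in total.
    keepalive 50;
}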

  3. How is the length of the keepalive determined? Do the origin server’s
    keepalive settings factor in at all?

Connections are kept in the cache till the origin server closes them.

  4. If no traffic comes across this upstream for an extended period of
    time, will the connections be closed automatically or will they stay
    open indefinitely?

See above.

  5. Are the connections in keepalive shared across visitors to the proxy?
    For example, if I have three visitors to the proxy one after the other,
    would the expectation be that they use the same connection via
    keepalive, or would a new connection be opened for each of them?

Connections in the cache are shared for all uses of the upstream.
As long as a connection is idle (and hence in the cache), it can
be used for any request by any visitor.

  6. Is there any common level of performance benefit I should be seeing
    from enabling keepalive compared to just performing a proxy_pass
    directly to the origin server with no upstream block?

No.

There are two basic cases when keeping connections alive is
really beneficial:

  • Fast backends, which produce responses in a very short time,
    comparable to a TCP handshake.

  • Distant backends, when a TCP handshake takes a long time,
    comparable to a backend response time.

There are also some bonus side effects (a reduced number of sockets
in the TIME-WAIT state, less work for the OS to establish new
connections, fewer packets on the network), but these are unlikely to
result in measurable performance benefits in a typical setup.


Maxim D.
http://nginx.org/
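
Pulling these answers together, a minimal configuration sketch for
proxying with upstream keepalive (the upstream name and address are
taken from the original question; the comments summarize the documented
requirements):

http {
    upstream upstream_test_1 {
        server 1.1.1.1 max_fails=0;
        # Cache up to 50 idle connections per worker process.
        keepalive 50;
    }

    server {
        listen 80;

        location / {
            # Both of the following are required for keepalive
            # connections to the upstream: HTTP/1.1 and a cleared
            # Connection header.
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $http_host;
            proxy_pass http://upstream_test_1;
        }
    }
}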

Maxim,

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

I would like to know: what is the keepalive timeout for this connection
pool? Is it static?

Also, I want to understand: is there a relationship between the number
of connections nginx accepts from clients and how many it opens to the
upstream?

Thanks!


Maxim D. Wrote:

I would like to know: what is the keepalive timeout for this connection
pool? Is it static?

As of now, there is no timeout on nginx side. Connections are
closed either by backends or if there isn’t enough room in
the cache.

So how long after a connection to upstream goes from ACTIVE to idle in
the connection pool does it get closed? There is not really much
documentation on this upstream keepalive component.

Also, I want to understand: is there a relationship between the number
of connections nginx accepts from clients and how many it opens to the
upstream?

This depends on how long it takes to process a request (as well as
various other factors). As long as backends are fast enough, one
connection to upstream may be enough to handle tens or hundreds of
client connections.

OK. What is the difference between ‘max_conns’ and
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn
for an upstream service?



Hello!

On Tue, Oct 28, 2014 at 08:01:33PM -0400, newnovice wrote:

Maxim,

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

I would like to know: what is the keepalive timeout for this connection
pool? Is it static?

As of now, there is no timeout on nginx side. Connections are
closed either by backends or if there isn’t enough room in
the cache.

Also, I want to understand: is there a relationship between the number
of connections nginx accepts from clients and how many it opens to the
upstream?

This depends on how long it takes to process a request (as well as
various other factors). As long as backends are fast enough, one
connection to upstream may be enough to handle tens or hundreds of
client connections.


Maxim D.
http://nginx.org/

“isn’t enough room in the cache.”

How big is the upstream keepalive connection-pool cache size?


Hello!

On Wed, Oct 29, 2014 at 02:20:28PM -0400, newnovice wrote:

“isn’t enough room in the cache.”

how big is the upstream keepalive connection-pool cache size?

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive


Maxim D.
http://nginx.org/

Hello!

On Wed, Oct 29, 2014 at 01:15:56PM -0400, newnovice wrote:

So how long after a connection to upstream goes from ACTIVE to idle in
the connection pool does it get closed? There is not really much
documentation on this upstream keepalive component.

That’s unspecified, see above.

OK. What is the difference between ‘max_conns’ and
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn
for an upstream service?

The “max_conns” parameter (only available in nginx-plus) limits
the number of active connections to an upstream server, while
limit_conn limits the number of active connections to a particular
location. This difference may be significant, for example, in the
following cases:

  • there are many upstream servers in a single upstream{} block;

  • some responses are returned from cache;

  • responses are large enough and clients are slow, so responses
    are buffered by nginx for a long time.
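
To make the difference concrete, a configuration sketch (the addresses,
zone name, and limits are illustrative; as noted above, max_conns was
nginx-plus-only at the time of this thread):

upstream backend {
    # max_conns caps active connections to each individual upstream
    # server, wherever the traffic to it comes from.
    server 10.0.0.1 max_conns=100;
    server 10.0.0.2 max_conns=100;
}

# With a constant key such as $server_name, limit_conn caps the total
# number of simultaneous connections handled by the location below,
# regardless of which upstream server (or the cache) serves them.
limit_conn_zone $server_name zone=perserver:10m;

server {
    location / {
        limit_conn perserver 100;
        proxy_pass http://backend;
    }
}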


Maxim D.
http://nginx.org/

As stated in
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive:
“It should be particularly noted that the keepalive directive does not
limit the total number of connections to upstream servers that an nginx
worker process can open. The connections parameter should be set to a
number small enough to let upstream servers process new incoming
connections as well.” I want to understand: if a new client comes, why
can’t they use existing keep-alive connections? Do they need to create
a new connection with upstream?


On Tuesday 14 June 2016 04:09:06 aanchalj wrote:

As stated in
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive:
“It should be particularly noted that the keepalive directive does not
limit the total number of connections to upstream servers that an nginx
worker process can open. The connections parameter should be set to a
number small enough to let upstream servers process new incoming
connections as well.” I want to understand: if a new client comes, why
can’t they use existing keep-alive connections? Do they need to create
a new connection with upstream?

[…]

It’s about the case when all of the existing keep-alive connections
are already in use and processing other client requests.

wbr, Valentin V. Bartenev
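
As a closing illustration of this point, a sketch with illustrative
numbers: the keepalive directive caps only the idle cache, not the
total number of connections a worker may open.

upstream backend {
    server 10.0.0.1;
    # At most 16 idle connections are cached per worker. Under load a
    # worker can still open as many additional connections as needed;
    # once the cache is full, the least recently used idle connections
    # are closed rather than cached.
    keepalive 16;
}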