ngx_http_upstream_keepalive

Hello!

So I've finally made the promised module for maintaining persistent
connections to backends. At the moment this only makes sense for
connections to memcached. With http and fastcgi don't even try,
nothing good will come of it.

It's configured something like this:

upstream memd {
    server  127.0.0.1:11211;
    server  127.0.0.1:11212;
    ...
    keepalive 10;
}

After that, up to 10 persistent connections will be maintained to
the servers (in total, across all of them).

In theory the module should work correctly with any balancer
(provided the balancer is activated before keepalive).
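For completeness, a hypothetical config combining a balancer with
keepalive might look like this (the balancer directive goes first, as
noted above; the addresses are made up):

```nginx
upstream memd {
    ip_hash;
    server  127.0.0.1:11211;
    server  127.0.0.1:11212;
    keepalive 10;
}
```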

Those who want to test it can get it here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive

There is also a README there with a short guide.

Maxim D.

p.s. It won't build with nginx 0.6.*; you need 0.7.*.

Hello!

On Fri, Oct 24, 2008 at 07:22:16PM +0400, Maxim D. wrote:

Hello!

Oops, sorry, it was intended for nginx-ru@.

In short: it's a keepalive upstream balancer module; it may be used
to keep connections to memcached alive.

It won’t work with http or fastcgi backends, don’t even try.

More details here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive/raw-file/tip/README

Mercurial repository may be found here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive

Maxim D.

On Fri, Oct 24, 2008 at 07:42:12PM +0400, Maxim D. wrote:

In short: it's a keepalive upstream balancer module; it may be used
to keep connections to memcached alive.

Cool! :)

It won’t work with http or fastcgi backends, don’t even try.

Is there any fundamental problem with supporting e.g. HTTP? Apart from
not breaking the HTTP spec and sending Connection: keepalive (or
whatever it looks like for HTTP/1.0)?

Best regards,
Grzegorz N.

On Fri, Oct 24, 2008 at 08:21:29PM +0400, Maxim D. wrote:

On Fri, Oct 24, 2008 at 05:57:40PM +0200, Grzegorz N. wrote:

On Fri, Oct 24, 2008 at 07:42:12PM +0400, Maxim D. wrote:

In short: it's a keepalive upstream balancer module; it may be used
to keep connections to memcached alive.

Cool! :)

Feel free to report bugs / success stories. :)

I don’t actually use Memcached with Nginx but I am interested in
keepalive HTTP/FastCGI connections. It’s a great start.

Is there any fundamental problem with supporting e.g. HTTP? Apart from
not breaking the HTTP spec and sending Connection: keepalive (or
whatever it looks like for HTTP/1.0)?

Not really fundamental. But this will at least require nginx patching.

Couldn’t the header get injected in a filter? Or are other changes
required?

I've already started some work in this direction (and posted some
patches here), but it still requires more work.

Will have a look at your code and hopefully contribute something useful
or at least steal some good stuff :)

Best regards,
Grzegorz N.

Hello!

On Fri, Oct 24, 2008 at 06:34:00PM +0200, Grzegorz N. wrote:

I don’t actually use Memcached with Nginx but I am interested in
keepalive HTTP/FastCGI connections. It’s a great start.

Is there any fundamental problem with supporting e.g. HTTP? Apart from
not breaking the HTTP spec and sending Connection: keepalive (or
whatever it looks like for HTTP/1.0)?

Not really fundamental. But this will at least require nginx patching.

Couldn’t the header get injected in a filter? Or are other changes
required?

The request to the upstream is created in ngx_http_proxy_module, so
there are no filters there.

You may try to use proxy_set_header though, and then use
proxy_hide_header to filter out the Keep-Alive header from the
response. It may even work - if the backend handles HTTP/1.0 keepalive
connections and won't try to send chunked encoding to nginx.
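A sketch of that untested workaround might look like the following
(the upstream name is made up, and per the caveats above there is no
guarantee it actually works with a given backend):

```nginx
location / {
    proxy_pass http://backend;

    # ask the HTTP/1.0 backend to keep the connection open
    proxy_set_header Connection "keep-alive";

    # don't leak the backend's Keep-Alive header to clients
    proxy_hide_header Keep-Alive;
}
```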

But in fact nginx should be modified to support HTTP/1.1 to
backends.

For FastCGI it should be as simple as not setting the appropriate close
bit in the request created by ngx_http_fastcgi_module (and using my
patches for connection closing), but I haven't checked it yet.
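The "close bit" mentioned above is the FCGI_KEEP_CONN flag in the
FastCGI BEGIN_REQUEST record. As an illustrative sketch (following the
FastCGI 1.0 spec, not nginx's actual code), the record the web server
sends looks like this:

```python
import struct

# FastCGI record and flag constants (from the FastCGI 1.0 spec)
FCGI_VERSION_1     = 1
FCGI_BEGIN_REQUEST = 1
FCGI_RESPONDER     = 1
FCGI_KEEP_CONN     = 1  # flags bit: the app must NOT close the connection


def begin_request_record(request_id, keep_conn):
    """Build a BEGIN_REQUEST record; keep_conn is the 'close bit' inverted."""
    flags = FCGI_KEEP_CONN if keep_conn else 0
    # body: role (2 bytes), flags (1 byte), 5 reserved bytes
    body = struct.pack(">HB5x", FCGI_RESPONDER, flags)
    # header: version, type, requestId, contentLength, paddingLength, reserved
    header = struct.pack(">BBHHBB", FCGI_VERSION_1, FCGI_BEGIN_REQUEST,
                         request_id, len(body), 0, 0)
    return header + body


keepalive_record = begin_request_record(1, keep_conn=True)
```

When the flag is clear, the FastCGI application closes the connection
after the response; when set, it keeps it open for the next request.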

Maxim D.

On Fri, Oct 24, 2008 at 07:42:12PM +0400, Maxim D. wrote:

In short: it's a keepalive upstream balancer module; it may be used
to keep connections to memcached alive.

Great job! I am going to start testing this today on a site that is
doing about 1 billion hits a month exclusively from memcached. I will
let you know how it fares :)

Cheers
Kon

Hello!

On Fri, Oct 24, 2008 at 05:57:40PM +0200, Grzegorz N. wrote:

On Fri, Oct 24, 2008 at 07:42:12PM +0400, Maxim D. wrote:

In short: it's a keepalive upstream balancer module; it may be used
to keep connections to memcached alive.

Cool! :)

Feel free to report bugs / success stories. :)

It won’t work with http or fastcgi backends, don’t even try.

Is there any fundamental problem with supporting e.g. HTTP? Apart from
not breaking the HTTP spec and sending Connection: keepalive (or
whatever it looks like for HTTP/1.0)?

Not really fundamental. But this will at least require nginx patching.

I've already started some work in this direction (and posted some
patches here), but it still requires more work.

Maxim D.

On pią, paź 24, 2008 at 08:56:28 +0400, Maxim D. wrote:

The request to the upstream is created in ngx_http_proxy_module, so
there are no filters there.

Right. But it was worth asking ;)

You may try to use proxy_set_header though, and then use
proxy_hide_header to filter out the Keep-Alive header from the
response. It may even work - if the backend handles HTTP/1.0 keepalive
connections and won't try to send chunked encoding to nginx.

That was my idea too but I thought about encapsulating it somehow so
that the keepalive support would be transparent.

But in fact nginx should be modified to support HTTP/1.1 to
backends.

True. It can be a pain especially when your backend is a stupid embedded
device that only talks pidgin HTTP/1.1.

For FastCGI it should be as simple as not setting the appropriate close
bit in the request created by ngx_http_fastcgi_module (and using my
patches for connection closing), but I haven't checked it yet.

I think keepalive support would fit best in Nginx core (not as an
external module) with some infrastructure to support it. Consumers of
the upstream functionality (memcached, fastcgi, proxy) could provide a
“want keepalive” flag which would mean that they are aware of keepalive
and handle it at the protocol level.

As Nginx is as much a proxy as it is a web server, maybe it makes sense
to make the upstream layer stackable, like:

  • session affinity support (or something; influences peer choice, can
    skip the real load balancer)
  • the real load balancer (ip_hash, rr, fair; influences peer choice)
  • keepalive support (does not influence peer choice–usually)

We could also e.g. pass the request data to the load balancer while
we're at it, so it can be a bit smarter (e.g. POSTs to a single "master"
backend, GETs balanced over a pool of "slaves", à la databases). The
common cases could then be simpler and faster than handcrafting a config
file.
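That master/slave split can already be approximated by hand with
standard config (which is exactly the handcrafting a smarter balancer
would avoid). A made-up sketch, assuming map and proxy_pass with
variables are available:

```nginx
upstream master { server 10.0.0.1:8080; }
upstream slaves {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# writes go to the master, everything else to the slave pool
map $request_method $pool {
    POST     master;
    default  slaves;
}

server {
    location / {
        proxy_pass http://$pool;
    }
}
```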

Best regards,
Grzegorz N.

Hello!

On Fri, Oct 24, 2008 at 07:28:10PM +0200, Grzegorz N. wrote:

That was my idea too but I thought about encapsulating it somehow so
that the keepalive support would be transparent.

BTW, this won't work anyway without the connection close patches I
posted a while ago.

[…]

For FastCGI it should be as simple as not setting the appropriate close
bit in the request created by ngx_http_fastcgi_module (and using my
patches for connection closing), but I haven't checked it yet.

I think keepalive support would fit best in Nginx core (not as an
external module) with some infrastructure to support it.

I did it as a separate module since it's much more manageable to
have this code in a separate file. It's in Igor's hands to
incorporate this module into the nginx core if he decides to.

And actually there is unfinished keepalive code in the rr balancer
(and I actually have this code finished and working here),
but it doesn't look like a good direction to me - since there are
other balancers, and they need keepalive too.

Consumers of
the upstream functionality (memcached, fastcgi, proxy) could provide a
“want keepalive” flag which would mean that they are aware of keepalive
and handle it at the protocol level.

As Nginx is as much a proxy as it is a web server, maybe it makes sense
to make the upstream layer stackable, like:

The keepalive module is an example that demonstrates that the upstream
layer is stackable.

Maxim D.

More details here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive/raw-file/tip/README

Mercurial repository may be found here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive

Maxim D.

Do you have a full memcached example? How can I call the memd
location?
memcached_pass doesn't support the upstream directive.

Thanks in advance

Hello!

On Mon, Jan 26, 2009 at 07:34:00PM +0100, Chavelle V. wrote:

Do you have a full memcached example? How can I call the memd
location?
memcached_pass doesn't support the upstream directive.

It does. Try something like this:

upstream memd {
    server  127.0.0.1:11211;
    keepalive 1;
}

server {
    location / {
        memcached_pass memd;
    }
}

For more config examples you may try reading test code here:
http://mdounin.ru/hg/ngx_http_upstream_keepalive/file/tip/t/memcached-keepalive.t

Maxim D.