Upstream keepalive - call for testing

Hello!

JFYI:

Last week I posted a patch to nginx-devel@ which adds keepalive
support to various backends (via the upstream keepalive module),
including fastcgi and http backends (this in turn means nginx is now
able to talk HTTP/1.1 to backends; in particular, it now
understands chunked responses). The patch applies to 1.0.5 and 1.1.0.

Testing is appreciated.

You may find the patch and its description here:

http://mailman.nginx.org/pipermail/nginx-devel/2011-July/001057.html

The patch itself may be downloaded here:

http://nginx.org/patches/patch-nginx-keepalive-full.txt

The upstream keepalive module may be downloaded here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive/
http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz

Maxim D.
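
For anyone who wants to try it, here is a minimal configuration
sketch. It assumes nginx built with the full patch above and the
module added at configure time; the addresses, upstream names and
connection counts are placeholders, and the keepalive directive
syntax follows the module usage shown later in this thread:

    # Minimal sketch; backends and numbers are placeholders.
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;              # cache up to 16 idle connections
    }

    upstream fastcgi_backend {
        server 127.0.0.1:9000;
        keepalive 8;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;     # proxied connections reused
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass fastcgi_backend;  # fastcgi connections too
        }
    }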

Hi

I get a compile error on Ubuntu 11.04.

gcc --version
gcc (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2
Copyright © 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

configure:
./configure --with-cc-opt="-D NGX_UPSTREAM_KEEPALIVE_PATCHED" \
    --add-module=ngx_http_upstream_keepalive-0.4

compile error:
src/http/modules/ngx_http_memcached_module.c:411:19: error: comparison between signed and unsigned integer expressions

On 1 Aug 2011 17h07 WEST, [email protected] wrote:

Testing is appreciated.

http://mdounin.ru/hg/ngx_http_upstream_keepalive/
http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz

So either we use the patch or use the module. Correct?

Thanks,
— appa

Hello!

On Tue, Aug 02, 2011 at 09:37:12PM +0800, liseen wrote:

configure:
./configure --with-cc-opt="-D NGX_UPSTREAM_KEEPALIVE_PATCHED" \
    --add-module=ngx_http_upstream_keepalive-0.4

compile error:
src/http/modules/ngx_http_memcached_module.c:411:19: error: comparison between signed and unsigned integer expressions

Thank you for the report. You may grab the updated patch here:

http://nginx.org/patches/patch-nginx-keepalive-full-2.txt

Maxim D.

On Wed, Aug 3, 2011 at 1:36 AM, Maxim D. [email protected] wrote:

So either we use the patch or use the module. Correct?

No, to keep backend connections alive you need the module and the patch.
The patch provides the foundation in the nginx core for the module to
work with fastcgi and http.

With a custom nginx upstream binary protocol, I believe multiplexing
will now be possible?

Hello!

On Wed, Aug 03, 2011 at 01:42:13AM +0800, David Yu wrote:

So either we use the patch or use the module. Correct?

No, to keep backend connections alive you need the module and the patch.
The patch provides the foundation in the nginx core for the module to
work with fastcgi and http.

With a custom nginx upstream binary protocol, I believe multiplexing
will now be possible?

ENOPARSE, sorry.

Maxim D.

Hello!

On Tue, Aug 02, 2011 at 04:24:45PM +0100, António P. P. Almeida wrote:

The upstream keepalive module may be downloaded here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive/
http://mdounin.ru/files/ngx_http_upstream_keepalive-0.4.tar.gz

So either we use the patch or use the module. Correct?

No, to keep backend connections alive you need the module and the patch.
The patch provides the foundation in the nginx core for the module to
work with fastcgi and http.

Maxim D.

On Wed, Aug 3, 2011 at 1:50 AM, Maxim D. [email protected] wrote:

With a custom nginx upstream binary protocol, I believe multiplexing
will now be possible?

ENOPARSE, sorry.

After some googling ...
ENOPARSE is a nerdy term. It is one of the standard C library error
codes that can be set in the global variable "errno" and stands for
Error No Parse. Since you didn't get it, I can thus conclude that
unlike me you are probably a normal, well adjusted human being ;)

Now I get it. Well adjusted I am.

Hello!

On Wed, Aug 03, 2011 at 01:53:30AM +0800, David Yu wrote:

ENOPARSE is a nerdy term. It is one of the standard C library error
codes that can be set in the global variable "errno" and stands for
Error No Parse. Since you didn't get it, I can thus conclude that
unlike me you are probably a normal, well adjusted human being ;)

Actually, this definition isn't true: there is no such error code,
it's rather an imitation. The fact that the author of the definition
claims it's a real error indicates that, unlike me, he is a normal,
well adjusted human being. ;)

Now I get it. Well adjusted I am.

Now you may try to finally explain what you mean to ask in your
original message. Please keep in mind that you are talking to
somebody far from being normal and well adjusted. ;)

Maxim D.

p.s. Actually, I assume you are talking about fastcgi
multiplexing. Short answer is: no, it’s still not possible.

On Wed, Aug 3, 2011 at 2:47 AM, Maxim D. [email protected] wrote:

Now you may try to finally explain what you mean to ask in your
original message. Please keep in mind that you are talking to
somebody far from being normal and well adjusted. ;)

p.s. Actually, I assume you are talking about fastcgi
multiplexing.

Nope, not fastcgi multiplexing. Multiplexing over a custom/efficient
nginx binary protocol, where requests sent to the upstream include a
unique id which the upstream will also send on the response. This
allows for asynchronous, out-of-band messaging.

I believe this is what mongrel2 is trying to do now ... though as an
http server, it is nowhere near as robust/stable as nginx. If nginx
implements this (considering nginx already has a lot of market share),
it certainly would bring more developers/users in (especially the ones
needing async, out-of-band request handling).

Short answer is: no, it’s still not possible.
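
To illustrate the idea described above (purely hypothetical; no such
nginx upstream protocol exists), a frame header for a multiplexed
binary protocol might carry little more than a request id and a
length:

    #include <stdint.h>

    /* Hypothetical frame header, an illustration only: the upstream
     * would echo request_id on every response frame, so responses can
     * come back out of order over a single kept-alive connection. */
    typedef struct {
        uint32_t  request_id;  /* chosen by nginx, echoed by upstream */
        uint8_t   type;        /* request, response, out-of-band event */
        uint8_t   flags;       /* e.g. end-of-message marker */
        uint16_t  length;      /* payload bytes following this header */
    } frame_header_t;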

Hi

Could nginx keepalive work with HealthCheck? Does Maxim D. have a
plan to support it?

Hi,

I use nginx 0.8.54 together with the latest keepalive module and the
ajp module to build a proxy for a java runtime, and notice that when
"accept_mutex" is on, the number of upstream connections is 10 to 20
times that when "accept_mutex" is off.

The test environment is 2 CPU cores, 4G RAM.

Nginx conf:

worker_processes 2;

http {
    upstream java_server {
        server 127.0.0.1:8009 srun_id=jvm1;
        keepalive 500 single;
    }

    server {
        listen       80 default;
        server_name  xxx.xxx.xxx;

        location /index.jsp {
            ajp_intercept_errors  on;
            ajp_hide_header       X-Powered-By;
            ajp_buffers           16 8k;
            ajp_buffer_size       8k;
            ajp_read_timeout      30;
            ajp_connect_timeout   20;
            ajp_pass              java_server;
        }
    }
}

I use "ab -c 100 -n 50000 XXXX" to simulate visits, and "netstat -an |
grep 8009 -c" to see how many connections to the upstream are
established.

When "accept_mutex" is on, there are 28473 connections to the upstream,
but only 1674 connections when "accept_mutex" is off.

If there is only one worker process, the result is similar to
"accept_mutex" off.

Captured packets:

When "accept_mutex" is off, it is clear that it is the upstream server
that closes the connection:

2169    0.215060  127.0.0.1  127.0.0.1  TCP  41621 > 8009  [ACK]       Seq=1333 Ack=4654 Win=42496 Len=0 TSV=1513155905 TSER=1513155905
12474  15.216063  127.0.0.1  127.0.0.1  TCP  8009 > 41621  [FIN, ACK]  Seq=4654 Ack=1333 Win=42496 Len=0 TSV=1513159655 TSER=1513155905
12499  15.223667  127.0.0.1  127.0.0.1  TCP  41621 > 8009  [FIN, ACK]  Seq=1333 Ack=4655 Win=42496 Len=0 TSV=1513159657 TSER=1513159655
12500  15.223672  127.0.0.1  127.0.0.1  TCP  8009 > 41621  [ACK]       Seq=4655 Ack=1334 Win=42496 Len=0 TSV=1513159657 TSER=1513159657

When "accept_mutex" is on, it is nginx that closes the connection
first:

5966  1.479476  127.0.0.1  127.0.0.1  TCP  54788 > 8009  [ACK]       Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225689 TSER=1513225689
6008  1.483907  127.0.0.1  127.0.0.1  TCP  54788 > 8009  [FIN, ACK]  Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225690 TSER=1513225689
6012  1.484331  127.0.0.1  127.0.0.1  TCP  8009 > 54788  [FIN, ACK]  Seq=1035 Ack=298 Win=34944 Len=0 TSV=1513225690 TSER=1513225690
6013  1.484342  127.0.0.1  127.0.0.1  TCP  54788 > 8009  [ACK]       Seq=298 Ack=1036 Win=34944 Len=0 TSV=1513225690 TSER=1513225690

I fixed this by simply adding a test in
ngx_http_upstream_keepalive_close_handler:

static void
ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev)
{
    ngx_http_upstream_keepalive_srv_conf_t  *conf;
    ngx_http_upstream_keepalive_cache_t     *item;

    int                n;
    u_char             buf;
    ngx_connection_t  *c;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0,
                   "keepalive close handler");

    c = ev->data;
    item = c->data;
    conf = item->conf;

    /* only drop the cached connection when the upstream has actually
     * closed it (recv() returns 0, i.e. EOF) */
    if ((n = c->recv(c, &buf, 1)) == 0) {
        ngx_queue_remove(&item->queue);
        ngx_close_connection(item->connection);
        ngx_queue_insert_head(&conf->free, &item->queue);
    }
}

However, I still don't understand why the event handler fires in the
first place.

Hello!

On Wed, Aug 03, 2011 at 10:49:10AM +0800, 卫越 wrote:

I use nginx 0.8.54 together with the latest keepalive module and the
ajp module to build a proxy for a java runtime, and notice that when
"accept_mutex" is on, the number of upstream connections is 10 to 20
times that when "accept_mutex" is off.

The code you provided suggests you're not using the latest keepalive
module but rather something like 0.2. The current version is 0.4.
A check similar to yours (but a correct one) was added in upstream
keepalive module 0.3, see [1].

[1] ngx_http_upstream_keepalive: 9a4ee6fe1c6d

Maxim D.
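
For context, the check Maxim refers to has roughly this shape. This is
a sketch reconstructed from the keepalive code as it later appeared in
nginx itself, so details may differ from the 0.3 module:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>
    /* the keepalive_srv_conf_t/cache_t types come from the module */

    static void
    ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev)
    {
        ngx_http_upstream_keepalive_srv_conf_t  *conf;
        ngx_http_upstream_keepalive_cache_t     *item;

        int                n;
        char               buf[1];
        ngx_connection_t  *c;

        c = ev->data;

        /* peek one byte: 0 means the upstream closed the connection,
         * EAGAIN means the event was spurious and the cached
         * connection is still usable */
        n = recv(c->fd, buf, 1, MSG_PEEK);

        if (n == -1 && ngx_socket_errno == NGX_EAGAIN) {
            /* stale event, keep the connection cached */
            if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
                goto close;
            }
            return;
        }

    close:

        item = c->data;
        conf = item->conf;

        ngx_queue_remove(&item->queue);
        ngx_close_connection(c);
        ngx_queue_insert_head(&conf->free, &item->queue);
    }

As noted later in the thread, 0.4 additionally tests c->close before
the recv().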

I've been testing this on my localhost and one of my live servers (http
backend) for a good week now; I haven't had any issues that I have
noticed as of yet.

Servers are Debian Lenny and Debian Squeeze (oldstable, stable).

Hoping it will make it into the development (1.1.x) branch soon :)

Hi,

I'm trying to use keepalive http connections for proxy_pass directives
containing variables. Currently it only works for named upstream
blocks.

I'm wondering what would be the easiest way; maybe setting peer->get to
ngx_http_upstream_get_keepalive_peer and kp->original_get_peer to
ngx_http_upstream_get_round_robin_peer() towards the end of
ngx_http_create_round_robin_peer(). If I can figure out how to set
kp->conf to something sane, this might work :)

Thoughts ?

Thank you,
Matthieu.

Hello!

On Wed, Aug 03, 2011 at 05:06:56PM -0700, Matthieu T. wrote:

If I can figure out how to set kp->conf to something sane, this might work :)

Thoughts ?

You may try to pick one from the upstream module's main conf upstreams
array (e.g. the first upstream found with init set to
ngx_http_upstream_init_keepalive). Dirty, but should work.

Maxim D.
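
A sketch of what that lookup could look like, assuming it runs where r
(the request) and kp (the keepalive peer data) are in scope, as in the
module's own peer init code; an untested illustration of the
suggestion above, not code from the module:

    {
        ngx_uint_t                       i;
        ngx_http_upstream_srv_conf_t   **uscfp;
        ngx_http_upstream_main_conf_t   *umcf;

        umcf = ngx_http_get_module_main_conf(r, ngx_http_upstream_module);
        uscfp = umcf->upstreams.elts;

        /* borrow the srv conf of the first upstream{} block that is
         * handled by the keepalive module */
        for (i = 0; i < umcf->upstreams.nelts; i++) {

            if (uscfp[i]->peer.init != ngx_http_upstream_init_keepalive) {
                continue;
            }

            kp->conf = ngx_http_conf_upstream_srv_conf(
                           uscfp[i], ngx_http_upstream_keepalive_module);
            break;
        }
    }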

I have checked the code in 0.3 and the additional "c->close" check in
0.4; it looks OK to me. I suggest the archive should always keep up
with the changelog, and I have verified that it is updated now.

Thank you.

I've been testing this on my servers for 2 days now, handling
approximately 100mbit of constant traffic (3x20mbit, 1x40mbit).

I haven't noticed any large bugs; I had an initial crash on one of the
servers, however I haven't been able to replicate it. The servers are
a mixture of OpenVZ, Xen and one VMware virtualised container, running
Debian Lenny or Squeeze.

Speed increases from this module are decent: approximately 50ms off
the request time, and the HTTP download starts 200ms earlier,
resulting in a 150ms quicker load time on average.

All in all, seems good.

Thanks for all your hard work Maxim.

50ms per HTTP request (taken from the firebug and chrome resource
panels) is the time it takes the html to load, from request to
arrival. 200ms is the time saved before the http response starts
transferring to me (allowing other resources to begin downloading
before the HTML completes); previously the html only started
transferring after the full response was downloaded to the proxy
server (due to buffering).

We use HTTP to talk to the backends (between countries). The node has
a 30-80ms ping time between the backend and frontend (Russia->Germany,
Sweden->NL, Ukraine->Germany/NL etc).

Hello!

On Mon, Aug 08, 2011 at 02:44:12PM +1000, SplitIce wrote:

I've been testing this on my servers for 2 days now, handling
approximately 100mbit of constant traffic (3x20mbit, 1x40mbit).

I haven't noticed any large bugs; I had an initial crash on one of the
servers, however I haven't been able to replicate it. The servers are
a mixture of OpenVZ, Xen and one VMware virtualised container, running
Debian Lenny or Squeeze.

By "crash" you mean an nginx segfault? If yes, it would be great to
track it down (either to fix the problem in the keepalive patch or to
prove it's an unrelated problem).

Speed increases from this module are decent: approximately 50ms off
the request time, and the HTTP download starts 200ms earlier,
resulting in a 150ms quicker load time on average.

Sounds cool, but I don't really understand what "50ms off the request
time" and "download starts 200ms earlier" actually mean. Could you
please elaborate?

And, BTW, do you use proxy or fastcgi to talk to backends?

Maxim D.
