High Load configuration question

Hello,

I’ve been playing around with writing an nginx module and trying to
configure it to run at high load (testing with the curl-loader tool). My
benchmark is the http-static-module; that is, I want to run at least as
much load on my module as the static module can without errors. I’m also
(for now) keeping the default number of worker processes (1) and worker
connections (1024), but more on that later.

Currently, using curl-loader, I can send requests/recv responses off to
the ngx_http_static_module at a rate of 5000-7000 requests per second
over a period of several minutes, all with 200 OK responses.

With my module, I usually manage to get up to about 1000 requests per
second with all 200 OK responses.

Then, past that threshold, I start to see “Err Connection Time Out”
problems cropping up in the curl-loader log. Usually there will be long
blocks (maybe 20 or so) of them, then I’ll go back to 200
OK’s. The rates are still good, maybe 200 time outs out of 100,000
connections, but I’m wondering why they aren’t perfect like the
http-static-module.

The only real difference I can see between my module and the static
module is the time it takes to generate the response (I’ve set the test
up so that they return the same amount of data, ~5K, however my module
does do other memory allocations for processing).

I used gettimeofday to try and get microsecond resolution on the time it
takes to generate a response.
With the static module, I see about 20-50 microseconds on average to
generate a response. My module, which has to do more processing, takes
on average 60-260 microseconds to generate its response. The pattern
seems to start on the lower side, get larger, then go back to the lower
side, but this isn’t exact. Note that in both cases, I occasionally see
randomly high times (like 15000 microseconds); however, this doesn’t
correspond to the number of timeouts I see in curl-loader (indeed, I get
these even for the static module, which doesn’t time out).

So I tried simply adding a delay with usleep into the static module, and
sure enough, I started seeing time out errors cropping up with the
static module. So it seems the number of time outs is (roughly)
proportional to the time it takes to generate the response.

But I’m still not clear on why nginx is sending time outs at all. That
is, if it takes longer to generate the response, shouldn’t it just take
longer to send the response? Is there a configurable value somewhere
that’s causing nginx to send a time out? What effect does the number of
worker processes and connections have? I have curl-loader set to have no
limit on completion time (which I believe is the default), so I don’t
think it’s what’s causing the time outs, but I’m not sure (there is
nothing in nginx’s error.log when I get a time out).

I can indeed increase the number of worker processes/connections to get
better throughput with my module, but it takes more dramatic increases
than I would expect. E.g. 40 processes and 4000 connections or so let me
run 1400 connections/second on my module without errors. This helped
bring the processing time down to about 60-140 microseconds. But it
seems there should be a better way to achieve this throughput without
using that many resources.
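For comparison, a more conventional tuning would keep worker_processes near the core count and raise worker_connections instead (a sketch; note that worker_connections belongs inside the events block):

```nginx
worker_processes  2;      # roughly one per CPU core

events {
    worker_connections  4096;
}
```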

Any advice you might have would be helpful. One specific thing I’m
wondering is if I’m being too liberal with my use of ngx_palloc/calloc,
and that might be slowing things down? I.e., might explicitly freeing the
memory when it’s done help? But any other ideas would be great too.

Thanks, and have a good day!

Posted at Nginx Forum:

But I’m still not clear on why nginx is sending time outs at all. That is,
if it takes longer to generate the response, shouldn’t it just take longer
to send the response? Is there a configurable value somewhere that’s
causing nginx to send a time out?

I’m pretty sure that nginx doesn’t send “time outs” :wink:

E.g. 40 processes and 4000 connections or so let me run 1400
connections/second on my module without errors.

40 processes? Unless you’ve got 40 cores this is way too high.

Best regards,
Piotr Sikora < [email protected] >

Piotr S. Wrote:

But I’m still not clear on why nginx is sending time outs at all. That is,
if it takes longer to generate the response, shouldn’t it just take longer
to send the response? Is there a configurable value somewhere that’s
causing nginx to send a time out?

I’m pretty sure that nginx doesn’t send “time outs” :wink:

So you think it has more to do with the overall architecture? I could
see that, but I’m still not sure what’s causing it just because it takes
longer to generate the response.

E.g. 40 processes and 4000 connections or so let me run 1400
connections/second on my module without errors.

40 processes? Unless you’ve got 40 cores this is way too high.

Tell me about it … I was just trying to see what it would take to stop
getting time outs. There’s gotta be a better way.

Best regards,
Piotr Sikora < [email protected] >



So you think it has more to do with the overall architecture? I could see
that, but I’m still not sure what’s causing it just because it takes
longer to generate the response.

Longer response time == bigger backlog queue == bigger chances to drop
connection.
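In nginx terms, that queue is the listen socket’s accept backlog, which can be raised per listen directive (a sketch; the kernel’s own limit, e.g. net.core.somaxconn on Linux, still caps it):

```nginx
server {
    # backlog= sets the accept-queue size for this listen socket
    listen 80 backlog=1024;
}
```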

Best regards,
Piotr S. < [email protected] >

Hi there,

Thanks for your response, sorry it took me so long to reply.

  1. On FreeBSD you can try to increase kern.ipc.somaxconn. I bet 8192
    would be enough for your test. This is the socket backlog, which can hold
    connections before the server actually accepts them.

Currently, the OS I’m running on is Ubuntu 9.10, single core Intel with
1024 MB ram. But if I try a sysctl kern.ipc.somaxconn, it comes up as
unknown key.

  2. Try to shed some light on your module internals. Right now I can only
    say that you certainly messed something up inside the main loop, but I
    can’t tell you what and why.

Here’s the basics of the module. You can add a usleep to slow things
down, but I get time out errors even without that, with just some
simple output sent. Note that I do a ngx_http_read_client_request_body
because I need access to the post body.

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static char* ngx_http_my_module(ngx_conf_t *cf, ngx_command_t *cmd, void
*conf);
static ngx_int_t ngx_http_my_module_handler(ngx_http_request_t *r);

static ngx_command_t ngx_http_my_module_init_commands[] = {
    { ngx_string("my_module"),
      NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
      ngx_http_my_module,
      NGX_HTTP_LOC_CONF_OFFSET,
      0,
      NULL },
    ngx_null_command
};

ngx_http_module_t ngx_http_my_module_module_ctx = {
    NULL,                          /* preconfiguration */
    NULL,                          /* postconfiguration */

    NULL,                          /* create main configuration */
    NULL,                          /* init main configuration */

    NULL,                          /* create server configuration */
    NULL,                          /* merge server configuration */

    NULL,                          /* create location configuration */
    NULL                           /* merge location configuration */
};

ngx_module_t ngx_http_my_module_module = {
    NGX_MODULE_V1,
    &ngx_http_my_module_module_ctx,    /* module context */
    ngx_http_my_module_init_commands,  /* module directives */
    NGX_HTTP_MODULE,                   /* module type */
    NULL,                              /* init master */
    NULL,                              /* init module */
    NULL,                              /* init process */
    NULL,                              /* init thread */
    NULL,                              /* exit thread */
    NULL,                              /* exit process */
    NULL,                              /* exit master */
    NGX_MODULE_V1_PADDING
};

static void
ngx_http_my_module_post_handler(ngx_http_request_t *r);

static ngx_int_t
ngx_http_my_module_handler(ngx_http_request_t *r)
{
    ngx_int_t  rc;

    rc = ngx_http_read_client_request_body(r, ngx_http_my_module_post_handler);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return NGX_DONE;
}

static void
ngx_http_my_module_post_handler(ngx_http_request_t *r)
{
    ngx_int_t     rc;
    ngx_chain_t  *outputChain;

    outputChain = ngx_pcalloc(r->pool, sizeof(ngx_chain_t));
    if (outputChain == NULL) {
        ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
        return;
    }

    outputChain->buf = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    if (outputChain->buf == NULL) {
        ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
        return;
    }

    outputChain->buf->pos = ngx_palloc(r->pool, 8);
    outputChain->buf->start = outputChain->buf->pos;
    outputChain->buf->memory = 1;
    outputChain->buf->last_buf = 1;
    ngx_memcpy(outputChain->buf->pos, "Testing", 7);
    outputChain->buf->pos[7] = '\0';
    outputChain->buf->last = outputChain->buf->pos + 7;

    r->headers_out.content_length_n = 7;
    r->headers_out.content_type.len = sizeof("text/plain") - 1;
    r->headers_out.content_type.data = (u_char *) "text/plain";
    r->headers_out.status = NGX_HTTP_OK;

    rc = ngx_http_send_header(r);
    if (rc == NGX_ERROR || rc > NGX_OK) {
        ngx_http_finalize_request(r, rc);
        return;
    }

    rc = ngx_http_output_filter(r, outputChain);

    ngx_http_finalize_request(r, rc);
}

static char *
ngx_http_my_module(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
    clcf->handler = ngx_http_my_module_handler;

    return NGX_CONF_OK;
}

My nginx.conf file has:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    server {
        listen       80;
        server_name  localhost;

        location /somefile.html {
            my_module;
        }
    }
}

Running curl-loader against that module, I can get around 3000 requests
per second, but with Connection Time out errors.

However, if I copy and paste that same code into the “static” module
(simply change my_module to static) and run it, I get around
6000 requests per second with NO connection time out errors.

So how and why is my module being treated any differently than the
static module? I don’t see why mine can’t run at the same speed as it,
even when the code is pretty much a copy and paste of one another
and the configurations/systems are identical. I can add usleeps to the
static module and get time outs, but my module seems to get them even
without it.

If it helps too, the connection time outs usually come in long blocks,
e.g. 20,000 OKs followed by 50 time outs, followed by thousands more
OKs, etc…

Thanks!


This all sounds like your module is taking focus and computing something
for a long time (more than 10 msec) without giving focus back. That way,
when the buffers are full there’s no one to pick connections out of them,
so those connections will time out without a response. It’s simply
because your module is written that way.

That makes sense. How do I make it give back focus while still computing
its response? Is there an ngx_give_focus, ngx_release_focus or
something somewhere?

Thanks.

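There is no ngx_give_focus as such; the usual pattern is to split long work into slices and re-arm a timer (or post an event) between slices so the worker can drain the accept queue in between. A rough, untested sketch of that shape, in which my_module_ctx_t, my_module_work_slice, my_module_send_response and TOTAL_STEPS are all hypothetical names:

```c
typedef struct {
    ngx_http_request_t  *request;
    ngx_event_t          work_event;   /* re-armed between slices */
    ngx_uint_t           step;
} my_module_ctx_t;

static void
my_module_work_handler(ngx_event_t *ev)
{
    my_module_ctx_t  *ctx = ev->data;

    my_module_work_slice(ctx);               /* do one bounded chunk of work */

    if (ctx->step < TOTAL_STEPS) {
        ngx_add_timer(&ctx->work_event, 0);  /* yield back to the event loop */
    } else {
        my_module_send_response(ctx->request);
    }
}
```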

  1. On FreeBSD you can try to increase kern.ipc.somaxconn. I bet 8192
    would be enough for your test. This is the socket backlog, which can hold
    connections before the server actually accepts them.

Ahhhh…on Ubuntu it’s net.core.somaxconn, for which mine was only set
to 128.

I didn’t even have to go that high, just up to the number of
simultaneous connections I was specifying in curl-loader, between
500-1000, to get 2500 reqs/second (which I’d say is about what my module
should be able to handle given the time it takes to generate the
response) with no time outs and all 200 OK responses.
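For anyone hitting the same wall on Linux, the change was along these lines (1024 here is illustrative; match it to your concurrent-connection count):

```shell
# check the current accept-queue limit
sysctl net.core.somaxconn          # only 128 by default on many distros

# raise it for the running kernel
sudo sysctl -w net.core.somaxconn=1024

# persist across reboots
echo "net.core.somaxconn = 1024" | sudo tee -a /etc/sysctl.conf
```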

Hopefully if I want more than that, I can set up multiple back-ends and
reverse proxy them, which is why I was interested in nginx in the first
place.
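That scale-out would look roughly like the standard upstream setup (the backend addresses and ports here are placeholders):

```nginx
upstream my_module_backends {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;

    location /somefile.html {
        proxy_pass http://my_module_backends;
    }
}
```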

Still, I am curious as to what you meant by “your module is taking focus
and not giving it back”, also if anyone else has something to add feel
free to chime in!

Thanks.


One more thing:

I can add usleeps to the static module and get time outs, but my module seems to get them even without it.

They don’t always occur in the static module, even with a usleep, and at
a much slower rate even when they do. So it’s still doing something I
don’t understand, even though I can copy and paste the above code into
it and run it!
