Nginx + my module crashes only when proxy_ignore_client_abort is on

I use nginx 1.2.5 (also tried 1.2.7) with my module, which sends a
subrequest to an upstream, waits until the response comes back, then goes
to the backend upstream and fetches the regular web page from it.
When I add “proxy_ignore_client_abort on;” to the nginx conf, nginx
crashes with signal 11 (segfault) when I run an “ab” test and stop it in
the middle (log: “client prematurely closed…” etc.).
When I cancel my subrequest - no crash.
Or: when I remove proxy_ignore_client_abort (default off) - no crash,
even with the subrequest.
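For context, here is a minimal config sketch of the setup described above - every server name, address, and location in it is a placeholder I made up, not the real config:

```nginx
# hypothetical sketch of the setup - all names and addresses are placeholders
server {
    listen 80;
    server_name www.aaa.com;

    # keep proxying to the upstream even if the client disconnects;
    # with this on (and the module's subrequest active), the segfault appears
    proxy_ignore_client_abort on;

    # internal location the module's POST subrequest is routed through
    location /aaa_post/ {
        internal;
        proxy_pass http://10.0.0.99:80;
        proxy_http_version 1.1;
    }

    # regular pages go to the backend upstream
    location / {
        proxy_pass http://10.0.0.100:8080;
    }
}
```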

Here’s the backtrace from the core dump:

Program terminated with signal 11, Segmentation fault.
#0  0x000000000045926b in ngx_http_terminate_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:2147
2147        ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
(gdb) bt
#0  0x000000000045926b in ngx_http_terminate_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:2147
#1  0x0000000000458b98 in ngx_http_finalize_request (r=0x1964360, rc=0) at src/http/ngx_http_request.c:1977
#2  0x0000000000459d80 in ngx_http_test_reading (r=0x1964360) at src/http/ngx_http_request.c:2443
#3  0x00000000004587fc in ngx_http_request_handler (ev=0x192b428) at src/http/ngx_http_request.c:1866
#4  0x000000000043f1e0 in ngx_epoll_process_events (cycle=0x18f0470, timer=59782, flags=1) at src/event/modules/ngx_epoll_module.c:683
#5  0x000000000042eea4 in ngx_process_events_and_timers (cycle=0x18f0470) at src/event/ngx_event.c:247
#6  0x000000000043ba36 in ngx_single_process_cycle (cycle=0x18f0470) at src/os/unix/ngx_process_cycle.c:315
#7  0x000000000040a33a in main (argc=3, argv=0x7fff9e8fec18) at src/core/nginx.c:409

The cause of the crash: it appears that “r->connection->log” is NULL or
garbage - it was already freed by the nginx core!

Please help.
Tnx,
Gad

Posted at Nginx Forum:

I attach here my “debug_http” log - note that “http finalize request” is
called twice (I think one of them NULLs the connection, and so NULLs its
log too), and that does NOT happen when NOT using proxy_ignore_client_abort:

BTW: I use proxy_http_version 1.1, if that helps.


2013/03/14 17:49:23 [info] 29550#0: *55 client prematurely closed connection while sending request to upstream, client: 10.0.0.18, server: www.aaa.com, request: "GET / HTTP/1.0", upstream: "http://x.x.x.x:80/", host: aaa.com
2013/03/14 17:49:23 [debug] 29550#0: *55 http finalize request: 0, "/?" a:1, c:1
2013/03/14 17:49:23 [debug] 29550#0: *55 http terminate request count:1
2013/03/14 17:49:23 [debug] 29550#0: *55 cleanup http upstream request: "/"
2013/03/14 17:49:23 [debug] 29550#0: *55 finalize http upstream request: -4
2013/03/14 17:49:23 [debug] 29550#0: *55 finalize http proxy request
2013/03/14 17:49:23 [debug] 29550#0: *55 free keepalive peer
2013/03/14 17:49:23 [debug] 29550#0: *55 free rr peer 3 0
2013/03/14 17:49:23 [debug] 29550#0: *55 close http upstream connection: 25
2013/03/14 17:49:23 [debug] 29550#0: *55 http finalize request: -4, "/?" a:1, c:1
2013/03/14 17:49:23 [debug] 29550#0: *55 http lingering close handler
2013/03/14 17:49:23 [debug] 29550#0: *55 lingering read: 0
2013/03/14 17:49:23 [debug] 29550#0: *55 http request count:1 blk:0
2013/03/14 17:49:23 [debug] 29550#0: *55 http close request
2013/03/14 17:49:23 [debug] 29550#0: *55 http log handler
2013/03/14 17:49:23 [debug] 29550#0: *55 close http connection: 23

Hello!

On Thu, Mar 14, 2013 at 11:36:58AM -0400, gadh wrote:

I use nginx 1.2.5 (also tried 1.2.7) with my module, which sends a
subrequest to an upstream, waits until the response comes back, then goes
to the backend upstream and fetches the regular web page from it.
When I add “proxy_ignore_client_abort on;” to the nginx conf, nginx
crashes with signal 11 (segfault) when I run an “ab” test and stop it in
the middle (log: “client prematurely closed…” etc.).
When I cancel my subrequest - no crash.
Or: when I remove proxy_ignore_client_abort (default off) - no crash,
even with the subrequest.

Description of the problem suggests there is something wrong with
request reference counting, likely caused by what your module
does. It’s very easy to screw it up, especially when trying to do
subrequests before the request body is received.

Hard to say anything else without the code.


Maxim D.
http://nginx.org/en/donation.html

Hello!

On Thu, Mar 14, 2013 at 12:46:43PM -0400, gadh wrote:

Thanks.
After I get the subrequest response in a handler function I registered,
what can I do in order to tell the nginx core that the subrequest has
finished? In my case I do only these actions:

ngx_http_core_run_phases(r->main);
return NGX_OK;

Is this OK?

No. This looks like completely wrong code, which may easily screw
things up.


Maxim D.
http://nginx.org/en/donation.html

Thanks.
After I get the subrequest response in a handler function I registered,
what can I do in order to tell the nginx core that the subrequest has
finished? In my case I do only these actions:

ngx_http_core_run_phases(r->main);
return NGX_OK;

Is this OK?

BTW, it’s not a case of a client body - I’m talking about GET requests
that also crash, not POST.


More info: when I use “ignore client abort = on”, the crash happens when
the client aborts the connection, BEFORE my subrequest handler is called,
so it’s unlikely that this code causes the crash.
Also, I send the subrequest to a configured URL named “aaa_post/”, which
uses the proxy module to send it to another server.


OK, I’ll attach my subrequest-calling code; it works flawlessly except in
the case I reported here:
//------------------------------------------------------------------
/*
Note: the purpose of this code is to call a handler module (at rewrite
phase), send a special POST subrequest to another server (independent of
the main request), make the module wait until the subrequest finishes,
process its data, then continue to the backend or to the next handler
module
*/

// the subrequest will call this handler after it finishes
ngx_int_t ngx_aaa_post_subrequest_handler(ngx_http_request_t *r, void *data, ngx_int_t rc)
{
    ngx_aaa_ctx_t *ctx = (ngx_aaa_ctx_t *) data;
    ngx_chain_t *bufs;
    ngx_uint_t status;

    if (rc != NGX_OK)
    {
        NGX_aaa_LOG_ERROR("bad response (nginx code %d)", rc);
        ctx->post.error = 1;
        aaa_SUB_PROF_END
        ngx_http_core_run_phases(r->main); // continue main request
        return NGX_OK; // cannot return rc if != NGX_OK - see below
    }

    if (r->upstream) // when sending to another server, the subrequest is passed on to the upstream module
    {
        bufs = r->upstream->out_bufs;
        status = r->upstream->state->status;
    }
    else // runs on the same nginx, another port
    {
        NGX_aaa_LOG_ERROR("response could not be fetched by the 'upstream' method. aborting");
        ctx->post.error = 1;
        aaa_SUB_PROF_END
        ngx_http_core_run_phases(r->main);
        return NGX_OK;
    }

    if (status != NGX_HTTP_OK) // 200 OK
    {
        NGX_aaa_LOG_ERROR("bad response status (%d)", status);
        ctx->post.error = 1;
        aaa_SUB_PROF_END
        ngx_http_core_run_phases(r->main);
        return NGX_OK; // when returning an error from a subrequest, nginx loops over it until OK, or after 2 loops it gets the main request stuck
    }

    ctx->post.done = 1;

    ctx->post.response_data = ngx_aaa_utils_get_data_from_bufs(r, bufs);

    // passes ctx->post.response_data to ngx_aaa_response_handler() - data parsing
    ctx->post.response_handler(r, data);

    if (!ctx->post.response_data)
        ngx_http_core_run_phases(r->main);

    if (!ctx->standalone)
        ngx_http_core_run_phases(r->main); // release the main request from its wait and send it to the backend server

    return NGX_OK;
}

// main code for calling the subrequest
ngx_int_t ngx_aaa_send_post_subrequest(ngx_http_request_t *r, ngx_aaa_ctx_t *ctx, char *_uri, ngx_str_t *data, ngx_uint_t is_ret)
{
    ngx_http_request_t *sr;
    ngx_uint_t flags = 0;
    ngx_http_post_subrequest_t *psr;
    ngx_str_t uri;
    ngx_int_t res;
    ngx_buf_t *buf;

    flags = NGX_HTTP_SUBREQUEST_IN_MEMORY;

    uri.data = (u_char *) _uri;
    uri.len = strlen(_uri);

    psr = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
    if (!psr)
        return NGX_HTTP_INTERNAL_SERVER_ERROR;

    ctx->done = 0;
    ctx->post.done = 0;
    ctx->post.start = 1;

    if (is_ret) // return the answer to the caller, async
    {
        // register the callback function for returning the answer from the other end
        psr->handler = ngx_aaa_post_subrequest_handler;
        psr->data = ctx;
    }
    else
        psr = NULL;

    // this func only registers the subreq in a queue, it does not activate it yet
    // note: sr->request_body is NULLed during this func, alloc it later
    res = ngx_http_subrequest(r, &uri, NULL, &sr, psr, flags);
    if (res != NGX_OK)
        return NGX_HTTP_INTERNAL_SERVER_ERROR;

    ngx_memzero(&sr->headers_in, sizeof(sr->headers_in));

    buf = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    if (!buf)
        return NGX_ERROR;

    // args is an ngx_str_t with the body
    sr->method = NGX_HTTP_POST;

    ngx_memcpy(&(sr->method_name), &ngx_aaa_post_method_name, sizeof(ngx_str_t));

    buf->temporary = 1;

    buf->pos = data->data;
    buf->last = buf->pos + data->len;

    // do not inherit the request body from the parent
    sr->request_body = ngx_palloc(r->pool, sizeof(ngx_http_request_body_t));
    NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body)

    // note: always alloc bufs even if the ptr looks valid - since it's garbage from a former request! (caused a seg fault in mod_proxy!)
    sr->request_body->bufs = ngx_alloc_chain_link(r->pool);
    NGX_aaa_CHECK_ALLOC_AND_RETURN(sr->request_body->bufs)

    // post body - re-populate, do not inherit from the parent
    sr->request_body->bufs->buf = buf;
    sr->request_body->bufs->next = NULL;
    sr->request_body->buf = buf;

    sr->header_in = NULL;
    buf->last_in_chain = 1;
    buf->last_buf = 1;

    sr->request_body_in_single_buf = 1;

    sr->headers_in.content_length_n = ngx_buf_size(buf);

    ngx_str_t c_len_key = ngx_string("Content-Length");
    ngx_str_t c_len_l;
    char len_str[20];
    sprintf(len_str, "%lu", ngx_buf_size(buf));
    c_len_l.data = (u_char *) len_str;
    c_len_l.len = strlen(len_str);

    ngx_aaa_set_input_header(sr, &sr->headers_in.content_length, &c_len_key, &c_len_l);

    ngx_str_t key, l;

    ngx_str_set(&key, "Content-Type");
    ngx_str_set(&l, "application/x-www-form-urlencoded");
    ngx_aaa_set_input_header(sr, &sr->headers_in.content_type, &key, &l);

    return NGX_OK;
}

// handler module main function - calls the subrequest, waits for it to finish
ngx_int_t ngx_aaa_handler(ngx_http_request_t *r)
{
    // pseudo code: alloc module ctx - only once

    if (ctx->post.start)
    {
        // check if the post subrequest has ended - then call the next module handler
        if (ctx->post.done)
        {
            return NGX_DECLINED; // declined - if hdl is reg. in rewrite phase
        }
        else // wait for the post subrequest to finish, unless error
        {
            if (ctx->post.error)
            {
                return NGX_DECLINED; // subrequest finished - call next handler module
            }
            else
            {
                return NGX_AGAIN; // wait until the response to our subrequest finishes
            }
        }
    }

    // prepare subrequest
    // ngx_str - post body for the subrequest
    ctx->post.response_handler = ngx_aaa_response_handler; // for subrequest response data parsing

    rc = ngx_aaa_send_post_subrequest(r, ctx, url, ngx_str, 1);

    if (rc != NGX_OK)
    {
        NGX_aaa_LOG_ERROR("ngx_aaa_send_post_subrequest failed (error %d)", rc);
        return NGX_DECLINED;
    }

    /* NGX_DECLINED == pass to next handler, do not wait.
     * NGX_OK == wait for subrequest to finish first (non-blocking, of course)
     */

    return NGX_OK;
}
//------------------------------------------------------------------

I’d appreciate your help.
BTW, is there any “nginx subrequest coding guide” documentation
available? It’s very confusing, and there’s not much info on the web - I
got it working only through a lot of trial and error.
Tnx
Gad


Thanks Maxim! I really appreciate your help on this.
About the temp file - I protect against a response being written to a
file by knowing the max size the server can send and enlarging the proxy
buffers accordingly.
I know I ruin the original request header - that’s the main purpose of my
code! I want to issue an independent subrequest to another server, not to
the original one. But the r->main->… is not ruined and behaves OK
afterwards.
In any case, I ask you to support this subrequest mechanism: it’s
obviously needed, to send a subrequest to any server, not just to the
original one, and also to control its response instead of just adding it
to the start/end of the page - it’s a lot more flexible.
Can I use another mechanism to achieve those goals? Create a new upstream
module?
Tnx
Gad


Hello!

On Mon, Mar 18, 2013 at 01:40:24AM -0400, gadh wrote:

Thanks Maxim! I really appreciate your help on this.
About the temp file - I protect against a response being written to a
file by knowing the max size the server can send and enlarging the proxy
buffers accordingly.

You are not initializing subrequest’s request_body->temp_file
pointer (among other request_body members). It might point
anywhere, and will cause problems.

I know I ruin the original request header - that’s the main purpose of my
code! I want to issue an independent subrequest to another server, not to
the original one. But the r->main->… is not ruined and behaves OK
afterwards.

Yes, indeed.

Note though, that by changing the headers_in structure you are
responsible for its consistency. It’s usually a much better idea
to use upstream functionality to create the needed request to an
upstream instead (proxy_set_body, proxy_pass_headers and so on).
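As a rough sketch of that suggestion, a fixed POST can be sent to another server through the stock proxy module instead of hand-building a subrequest body - the location name, address, and body below are placeholders, not anything from this thread:

```nginx
# hypothetical sketch - the location, address, and body are placeholders
location /aaa_post/ {
    internal;
    proxy_method POST;
    proxy_set_header Content-Type "application/x-www-form-urlencoded";
    proxy_set_body "key=value";
    proxy_pass http://10.0.0.99:80;
}
```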

In any case, I ask you to support this subrequest mechanism: it’s
obviously needed, to send a subrequest to any server, not just to the
original one, and also to control its response instead of just adding it
to the start/end of the page - it’s a lot more flexible.
Can I use another mechanism to achieve those goals? Create a new upstream
module?

What is supported is subrequest in memory functionality, which
allows you to get the response in memory instead of appending it
to the response. It only works with certain upstream protocols
though. And it wasn’t supposed to work at arbitrary request
processing phases, so it might be non-trivial to do things
properly, in particular - ensure subrequest consistency at early
phases of request processing and to rerun the main request once
subrequest is complete.


Maxim D.
http://nginx.org/en/donation.html

Note though, that by changing the headers_in structure you are
responsible for its consistency. It’s usually a much better idea
to use upstream functionality to create the needed request to an
upstream instead (proxy_set_body, proxy_pass_headers and so on).

But can I wait for the upstream to return, and delay the request from
passing on to the backend, as I do with my subrequest?
When I use your suggested proxy directives I have no control over that.

Gad


I changed to pcalloc as you told me, and the crash seems to be solved!!
Thanks a lot.
Gad


Hello!

On Sun, Mar 17, 2013 at 05:47:24AM -0400, gadh wrote:

Below are just a couple of comments. The outlined problems are enough
to cause arbitrary segmentation faults, and I haven’t looked for
more.

[…]

ngx_memzero(&sr->headers_in, sizeof(sr->headers_in));

Note: this ruins original request headers. It’s enough to cause
anything.

[…]

sr->request_body->bufs->buf = buf;
sr->request_body->bufs->next = NULL;
sr->request_body->buf = buf;

Note: you allocate the request body structure and only initialize some
of its members. E.g. sr->request_body->temp_file remains
uninitialized and will likely be dereferenced, resulting in a
segmentation fault.

You have to at least change ngx_palloc() to ngx_pcalloc().

[…]

BTW, is there any “nginx subrequest coding guide” documentation
available? It’s very confusing, and there’s not much info on the web - I
got it working only through a lot of trial and error.

Subrequests are dead simple in their supported form: you just call
ngx_http_subrequest() in a body filter, and the result is added to
the output at the appropriate point. A good sample is available in
ngx_http_addition_filter_module.c.

What you try to do with subrequests isn’t really supported (the
fact that it works - is actually a side effect of subrequests
processing rewrite in 0.7.25), hence no guides.


Maxim D.
http://nginx.org/en/donation.html

After a few additions to the code - in totally irrelevant places - the
error returned, so it did not help.

Now I’m trying to create a new upstream handler so I can use it instead
of the subrequest model.

I described the model I work with in the first post above. Let me add
this: in my first tests of the upstream, I cannot get to the backend
server at all - I just get the upstream response, and nginx passes it
directly to my filters and then to the client. My needs are different:
after I receive the upstream response, I need the request to go to the
backend, and then I’m going to inject some of the data I received from
the upstream into the backend response, and only then send it to the
client.

Can you tell me if creating a new upstream module (my examples are
proxy/memcached) can suit my needs here?
Tnx a lot
Gad
