Buffer chain and NGX_AGAIN

Hi.

I have another question about buffer chains and NGX_AGAIN.

Suppose that ngx_http_output_filter returns NGX_AGAIN and I set up the
write event so that I am notified when I can send further data.

When my handler is called again, has the previous buffer been sent to
the client or copied into one of the output filter buffers?

This is important because in mod_wsgi the buffer points to a Python
object and I must know when I can safely deallocate it.

In the current implementation I send the response body to the client
“at once”, setting a cleanup handler so that I can free the Python
object when the request terminates.
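For reference, here is a minimal sketch of that cleanup approach; the
ngx_http_wsgi_cleanup function and the use of item are illustrative
assumptions on my part, not the actual mod_wsgi code:

static void
ngx_http_wsgi_cleanup(void *data)
{
    /* called when the request pool is destroyed, i.e. when the request
     * terminates and nginx can no longer reference the buffer memory */
    Py_XDECREF((PyObject *) data);
}

/* ... in the content handler, after obtaining the Python object ... */

ngx_pool_cleanup_t  *cln;

cln = ngx_pool_cleanup_add(r->pool, 0);
if (cln == NULL) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

cln->handler = ngx_http_wsgi_cleanup;
cln->data = item;  /* the PyObject whose memory backs the ngx_buf_t */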

Thanks,
Manlio P.

Manlio P. wrote:

> This is important because in mod_wsgi the buffer points to a Python
> object and I must know when I can safely deallocate it.

I have done some tests and, unfortunately, when the write handler is
called (after an NGX_AGAIN), the previous buffer is still in use, so I
can’t free it.

Here is the code I use:

static void
ngx_http_wsgi_iterator_handler(ngx_http_request_t *r) {
    ngx_http_core_loc_conf_t  *clcf;

    ngx_event_t               *wev;
    ngx_int_t                  rc;
    ngx_chain_t                out;
    ngx_buf_t                 *b;
    u_char                    *result;
    ngx_uint_t                 len;
    ngx_http_wsgi_ctx_t       *ctx;

    PyObject *item = NULL;

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    wev = r->connection->write;
    ctx = ngx_http_get_module_ctx(r, ngx_http_wsgi_module);

    // …

    /* Free the previous item, if any */
    /* XXX check me */
    // Py_XDECREF(ctx->last_item);   <=== the memory is still in use

    if (ctx->waiting) {
        ctx->waiting = 0;

        if (wev->timedout) {
            r->connection->timedout = 1;
            ngx_http_wsgi_finalize_request(r, NGX_HTTP_REQUEST_TIME_OUT);
            return;
        }
    }

    // … obtain the next item from the Python iterator …

    len = PyString_GET_SIZE(item);
    result = (u_char *) PyString_AsString(item);

    out.buf = b;
    out.next = NULL;

    // …

    b->pos = result;
    b->last = result + len;

    b->memory = 1;
    b->flush = 1;

    rc = ngx_http_output_filter(r, &out);

    switch (rc) {

    case NGX_OK:
        /*
         * Ok, the entire buffer has been sent to the client or copied
         * into one of the output filter buffers.
         *
         * We can free the item and continue the iteration.
         */
        Py_DECREF(item);
        ctx->last_item = NULL;

        ngx_http_wsgi_iterator_handler(r);
        return;

    case NGX_AGAIN:
        /*
         * The buffer can't be sent to the client right now.
         *
         * Save the item in the context, so that we can free it on the
         * next iteration, and set up the events so that we can continue
         * the iteration when we can send the buffer.
         */
        ctx->last_item = item;
        ctx->waiting = 1;

        ngx_add_timer(wev, clcf->send_timeout);
        r->write_event_handler = ngx_http_wsgi_iterator_handler;

        if (ngx_handle_write_event(wev, 0) == NGX_ERROR) {
            ngx_http_wsgi_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
            return;
        }

        return;

    default:
        ngx_http_wsgi_finalize_request(r, rc);
        return;
    }
}

If I do not free the “last” used Python object, then everything seems
to work well: after NGX_AGAIN, nginx still references the buffer memory
in its output chain, so the Python object backing it must stay alive
until the data has actually been sent.

P.S.
A performance note.
Serving an mp3 of 3831150 bytes,
with worker_processes 2 (on a dual core):

  • nginx: 200 requests/second
  • mod_wsgi with a bufsize of 4096 bytes: 35 requests/second
  • mod_wsgi with a bufsize of 8192 bytes: 53 requests/second
  • mod_wsgi with a bufsize of 16384 bytes: 81 requests/second
  • mod_wsgi with a bufsize of 40960 bytes: 100 requests/second

Regards,
Manlio P.

Manlio P. wrote:

>> This is important because in mod_wsgi the buffer points to a Python
>> object and I must know when I can safely deallocate it.

> I have done some tests and, unfortunately, when the write handler is
> called (after an NGX_AGAIN), the previous buffer is still in use, so I
> can’t free it.

I think I have found a good solution, but I would like confirmation.

Instead of storing the last used Python object in the context, I will
store a list.

When ngx_http_output_filter returns NGX_AGAIN, I push the Python object
onto the list.
When ngx_http_output_filter returns NGX_OK, I pop a Python object from
the list and free it.

To make sure all the objects are freed, I will add an
ngx_pool_cleanup_t handler that frees all the remaining items (assuming
that the HTTP context for mod_wsgi is still “in scope”).
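A minimal sketch of this scheme, with two assumptions on my part:
ctx->pending is an ngx_array_t of PyObject pointers (created elsewhere
with ngx_array_create), and, because each buffer is sent with b->flush
set, an NGX_OK return means everything queued so far has been written,
so the sketch releases the whole list rather than popping a single
item:

    PyObject  **slot, **items;
    ngx_uint_t  i;

    rc = ngx_http_output_filter(r, &out);

    switch (rc) {

    case NGX_AGAIN:
        /* nginx still references the buffer memory: keep the item alive */
        slot = ngx_array_push(ctx->pending);
        if (slot == NULL) {
            ngx_http_wsgi_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
            return;
        }
        *slot = item;
        break;

    case NGX_OK:
        /* with b->flush set, NGX_OK means everything queued so far has
         * been written: release the pending items and the current one */
        items = ctx->pending->elts;
        for (i = 0; i < ctx->pending->nelts; i++) {
            Py_DECREF(items[i]);
        }
        ctx->pending->nelts = 0;
        Py_DECREF(item);
        break;
    }

and the pool cleanup handler, registered once with
ngx_pool_cleanup_add(), so the remaining items are released even if the
request is finalized early:

    static void
    ngx_http_wsgi_pending_cleanup(void *data)
    {
        ngx_http_wsgi_ctx_t  *ctx = data;
        PyObject            **items = ctx->pending->elts;
        ngx_uint_t            i;

        for (i = 0; i < ctx->pending->nelts; i++) {
            Py_DECREF(items[i]);
        }
        ctx->pending->nelts = 0;
    }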

[…]

Thanks and regards,
Manlio P.