Upstream with multiple request/reply

Hi all,

I’m trying to write an upstream module that has to do multiple request/reply
exchanges with the backend for a single frontend request.

Currently,

  • I write the first request to the backend in the create_request handler:
    I put the data into r->upstream->request_bufs and return NGX_OK (a
    simplified sketch of this step is shown after this list).
  • Nginx then calls my process_header handler, where I can read the data
    from r->upstream->buffer, and I can loop on process_header by returning
    NGX_AGAIN. But I’m not able to send more data to the backend: I tried
    adding a buffer to r->upstream->request_bufs and to
    r->upstream->request_bufs->next, and nothing works.
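For reference, the first step looks roughly like this. It is only a
simplified sketch with placeholder names (my_upstream_create_request and the
"FIRST-REQUEST" message are made up), not the real module:

/* Sketch of a create_request handler that builds the first request
 * for the backend and hands it to nginx via request_bufs. */
static ngx_int_t
my_upstream_create_request(ngx_http_request_t *r)
{
    ngx_buf_t    *b;
    ngx_chain_t  *cl;
    ngx_str_t     msg = ngx_string("FIRST-REQUEST\r\n");

    b = ngx_create_temp_buf(r->pool, msg.len);
    if (b == NULL) {
        return NGX_ERROR;
    }

    b->last = ngx_copy(b->last, msg.data, msg.len);

    cl = ngx_alloc_chain_link(r->pool);
    if (cl == NULL) {
        return NGX_ERROR;
    }

    cl->buf = b;
    cl->next = NULL;

    /* nginx sends whatever is linked into request_bufs once,
       right after the backend connection is established */
    r->upstream->request_bufs = cl;

    return NGX_OK;
}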

Does anybody know how to send data to the backend from the process_header
callback?

Regards,

Bertrand

Hello!

On Mon, Jan 16, 2012 at 09:40:26PM +0100, Bertrand P. wrote:

But I’m not able to send more data to the backend: I tried adding a buffer to
r->upstream->request_bufs and to r->upstream->request_bufs->next, and nothing works.

Does anybody know how to send data to the backend from the process_header callback?

The upstream module is designed to handle a “single request - single
response” model; it’s not capable of sending multiple requests to the
backend.

Maxim D.

Hi,

Argh, that’s not good news :)

Do you think I can use ngx_http_subrequest to make some additional requests
to the backend?
How do I tell nginx to wait for the subrequest before calling create_request?

Regards,

Bertrand

Hello!

On Mon, Jan 16, 2012 at 10:45:58PM +0100, Bertrand P. wrote:

Hi,

Argh, that’s not good news :)

Do you think I can use ngx_http_subrequest to make some additional requests to
the backend?
How do I tell nginx to wait for the subrequest before calling create_request?

If you are OK with independent requests to the backend (that is, you
just need the two requests to happen, but don’t need to implement some
complex protocol), then using subrequests is the right way to go.

See e.g. the addition filter module sources for a simple subrequest
usage example, or the ssi module sources for a more complex one.
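A minimal call looks roughly like this; the “/aux” URI and the
my_post_subrequest / my_start_subrequest names are made up for illustration,
see the addition filter for a real example:

/* Post-subrequest callback: runs after the subrequest finishes. */
static ngx_int_t
my_post_subrequest(ngx_http_request_t *r, void *data, ngx_int_t rc)
{
    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "subrequest done: %i", rc);
    return rc;
}

/* Issue a subrequest for an auxiliary location from a handler/filter. */
static ngx_int_t
my_start_subrequest(ngx_http_request_t *r)
{
    ngx_http_request_t          *sr;
    ngx_http_post_subrequest_t  *ps;
    ngx_str_t                    uri = ngx_string("/aux");

    ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
    if (ps == NULL) {
        return NGX_ERROR;
    }

    ps->handler = my_post_subrequest;
    ps->data = NULL;

    /* NGX_HTTP_SUBREQUEST_IN_MEMORY asks nginx to keep the subrequest's
       response in memory instead of sending it to the client */
    return ngx_http_subrequest(r, &uri, NULL, &sr, ps,
                               NGX_HTTP_SUBREQUEST_IN_MEMORY);
}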

Maxim D.


Hi,

This demo may be helpful.


Hi all,

Finally, I did not use a subrequest; I wrote the following code, inspired by
the upstream module.
I call ngx_http_upstream_send_another_request from process_header, and it’s
working fine.

Feel free to comment on my code if you think I could have problems with
memory allocation, segfaults, or anything else.

Regards,

Bertrand

static void
ngx_http_upstream_send_another_request_dummy_handler(ngx_http_request_t *r,
    ngx_http_upstream_t *u)
{
    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http upstream send another request dummy handler");
}


static ngx_int_t ngx_http_upstream_send_another_request(ngx_http_request_t *r,
    ngx_http_upstream_t *u);


static void
ngx_http_upstream_send_another_request_handler(ngx_http_request_t *r,
    ngx_http_upstream_t *u)
{
    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http upstream send another request handler");

    ngx_http_upstream_send_another_request(r, u);
}


static ngx_int_t
ngx_http_upstream_send_another_request(ngx_http_request_t *r,
    ngx_http_upstream_t *u)
{
    ngx_int_t          rc;
    ngx_connection_t  *c;

    c = u->peer.connection;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
                   "http upstream send another request");

    // if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
    //     ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
    //     return;
    // }

    c->log->action = "sending request to upstream";

    rc = ngx_output_chain(&u->output, u->request_sent ? NULL : u->request_bufs);

    u->request_sent = 1;

    if (rc == NGX_ERROR) {
        return rc;
    }

    if (c->write->timer_set) {
        ngx_del_timer(c->write);
    }

    if (rc == NGX_AGAIN) {
        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
                       "ngx_output_chain return NGX_AGAIN");

        u->write_event_handler = ngx_http_upstream_send_another_request_handler;

        ngx_add_timer(c->write, u->conf->send_timeout);

        if (ngx_handle_write_event(c->write, u->conf->send_lowat) != NGX_OK) {
            return NGX_ERROR;
        }

        return NGX_AGAIN;
    }

    /* rc == NGX_OK */

    if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) {
        if (ngx_tcp_push(c->fd) == NGX_ERROR) {
            ngx_log_error(NGX_LOG_CRIT, c->log, ngx_socket_errno,
                          ngx_tcp_push_n " failed");
            return NGX_ERROR;
        }

        c->tcp_nopush = NGX_TCP_NOPUSH_UNSET;
    }

    ngx_add_timer(c->read, u->conf->read_timeout);

    // #if 1
    // if (c->read->ready) {
    //
    //     /* post aio operation */
    //
    //     /*
    //      * TODO comment
    //      * although we can post aio operation just in the end
    //      * of ngx_http_upstream_connect() CHECK IT !!!
    //      * it's better to do here because we postpone header buffer allocation
    //      */
    //
    //     return u->process_header(r);
    // }
    // #endif

    u->write_event_handler = ngx_http_upstream_send_another_request_dummy_handler;

    if (ngx_handle_write_event(c->write, 0) != NGX_OK) {
        return NGX_ERROR;
    }

    return NGX_AGAIN;
}
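For context, here is roughly how such a function can be driven from
process_header. This is only a simplified sketch: the “MORE”-style backend
protocol, the handler name and the “NEXT-REQUEST” message are made up for
illustration, not the actual module:

/* Sketch of a process_header that asks for another backend exchange. */
static ngx_int_t
my_upstream_process_header(ngx_http_request_t *r)
{
    ngx_http_upstream_t  *u = r->upstream;
    ngx_buf_t            *b;
    ngx_chain_t          *cl;
    ngx_str_t             next = ngx_string("NEXT-REQUEST\r\n");

    /* hypothetical protocol: the backend answers "MORE\n" while it
       expects a further request, then sends the real response */

    if (u->buffer.last - u->buffer.pos < 5) {
        return NGX_AGAIN;                       /* wait for more bytes */
    }

    if (ngx_strncmp(u->buffer.pos, "MORE\n", 5) == 0) {
        u->buffer.pos += 5;

        /* build the follow-up request */
        b = ngx_create_temp_buf(r->pool, next.len);
        if (b == NULL) {
            return NGX_ERROR;
        }
        b->last = ngx_copy(b->last, next.data, next.len);

        cl = ngx_alloc_chain_link(r->pool);
        if (cl == NULL) {
            return NGX_ERROR;
        }
        cl->buf = b;
        cl->next = NULL;

        /* hand the new chain to the sender and write it on the
           existing upstream connection */
        u->request_bufs = cl;
        u->request_sent = 0;

        if (ngx_http_upstream_send_another_request(r, u) == NGX_ERROR) {
            return NGX_ERROR;
        }

        return NGX_AGAIN;                       /* re-enter when data arrives */
    }

    /* final reply: fill in u->headers_in and finish header processing */
    u->headers_in.status_n = NGX_HTTP_OK;
    u->headers_in.content_length_n = -1;
    u->state->status = NGX_HTTP_OK;

    return NGX_OK;
}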