Fire and forget requests

Hi,

We have a very specific use case and are trying to find a solution for it.
We started looking at nginx as a possibility for handling this use case, as
we already use nginx for some of our other webserver duties. I’ve done some
testing and investigation, but it doesn’t seem like we can use nginx to do
what we want. However, I thought I’d check with the community before
dismissing it completely.

What we want is a fire-and-forget solution for request handling, where we
can set up nginx to receive a request from our web servers, pass this
request on to an external HTTP service or an HTTP backend, and send a 200
response back straight away to the requesting machine, leaving the original
request to be handled at whatever speed the backend is capable of. We don’t
care about the response from the backend server; it can simply be dropped
once it’s received.

Is something like the above possible? I did some testing by setting nginx
up as a load balancer, pointing at a backend web server, and using the
“return” directive before the proxy_pass directive, but the return
directive simply stops further execution of the request.

Any advice would be welcome.

Thanks,
Guy

On 8/31/11, Guy K. [email protected] wrote:


Is something like the above possible?

I think post_action can help here.
Basically, you can respond with a simple static page and set post_action
to forward the request to another location afterwards, which in turn can
pass the request on to the upstream.
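An untested sketch of that idea (note that post_action is undocumented, and
the location names and backend address here are placeholders):

```nginx
location /notify {
    # Answer the caller straight away with an empty 200 ...
    return 200;
    # ... then replay the request to a named location after the
    # response has been sent.
    post_action @forward;
}

location @forward {
    # backend.example.com stands in for the real upstream; its response
    # is simply discarded.
    proxy_pass http://backend.example.com;
}
```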

I don’t believe nginx can do this, since I think it’s too linear in the
way it processes the request.
The way I would approach this is by using the httpd built into C# -
HttpListener.
Once you receive the initial request you can send 200 immediately, then
spawn a new thread with the request to your backend using HttpWebRequest.
Just some ideas… this is potentially very easy using C# - and I’m sure
in quite a few other languages/scripts too.
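For illustration, the same pattern sketched in Python rather than C#: the
handler and `forward_to_backend` names are made up, and the queue stands in
for the real HTTP call HttpWebRequest would make.

```python
import threading
import queue

results = queue.Queue()

def forward_to_backend(payload):
    # Stand-in for the slow backend call (HttpWebRequest in the C# version);
    # a real version would issue an HTTP request here and drop the response.
    results.put(payload.upper())

def handle_request(payload):
    # Return 200 to the caller immediately; the backend work happens on a
    # background thread at whatever speed the backend can manage.
    threading.Thread(target=forward_to_backend, args=(payload,), daemon=True).start()
    return 200

status = handle_request("ping")
print(status)          # 200 comes back without waiting for the backend
print(results.get())   # the background thread finishes on its own
```

The same structure works in most languages with threads; the only essential
point is that the 200 is written before the backend call starts.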

From: [email protected] [mailto:[email protected]] On Behalf
Of Guy K.
Sent: 30 August 2011 22:35
To: [email protected]
Subject: Fire and forget requests


This could be implemented by submitting background jobs to Gearman:

http://gearman.org/



Best regards,
Valery K.

We did notice similar functionality with PHP FPM

http://php.net/manual/en/install.fpm.php

fastcgi_finish_request() - special function to finish request and flush
all data while continuing to do something time-consuming (video
converting, stats processing, etc.);

Alternatively, the approach we’re using is a Gearman message-bus-style
system.

On Wed, Aug 31, 2011 at 7:27 AM, Richard K. <[email protected]> wrote:
From: Guy K. [email protected]


What about using a simple fcgi script that returns 200 straight away and
just writes the request to a file/pipe/database/…

and another script that just reads the file/pipe/database/… for new
requests and sends them on to the backend?
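A rough sketch of that two-script idea in Python - the spool-file location,
the JSON-lines record format, and the function names are all made up for
illustration, and `send` stands in for the real HTTP call to the backend:

```python
import json
import os
import tempfile

# Hypothetical spool file shared by the two scripts.
fd, SPOOL = tempfile.mkstemp(suffix=".jsonl")
os.close(fd)

def accept(request):
    # Script 1 (fast path): append the request to the spool and
    # return 200 straight away.
    with open(SPOOL, "a") as f:
        f.write(json.dumps(request) + "\n")
    return 200

def drain(send):
    # Script 2 (slow path): replay spooled requests to the backend
    # at whatever pace it can handle, then clear the spool.
    with open(SPOOL) as f:
        for line in f:
            send(json.loads(line))
    os.remove(SPOOL)

sent = []
accept({"path": "/a"})
accept({"path": "/b"})
drain(sent.append)
print(sent)  # [{'path': '/a'}, {'path': '/b'}]
```

A real version would want the reader to tail the file (or use a pipe or
queue) rather than drain it in one pass, but the decoupling is the same.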

JD