We have a very specific use case and are trying to find a solution for it.
We started looking at nginx as a possibility for handling this use case, as
we already use nginx for some of our other web server duties. I’ve done some
testing and investigation, but it doesn’t seem like we can use nginx to do
what we want. However, I thought I’d check with the community before
dismissing it completely.
What we want is a fire-and-forget solution for request handling, where we
can set up nginx to receive a request from our web servers, pass this
request on to an external HTTP service or an HTTP backend, and send a 200
response back straight away to the requesting machine, leaving the original
request to be handled at whatever speed the backend is capable of. We don’t
care about the response from the backend server; it can simply be dropped
once it’s received.
Is something like the above possible? I did some testing by setting nginx up
as a load balancer pointing at a backend web server, and using the “return”
directive before the proxy_pass directive, but the return directive simply
stops further execution of the request.
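For reference, the kind of config I was testing looked roughly like this
(the address and location name are just placeholders):

    server {
        listen 80;

        location /notify {
            # Hoped this would answer the caller with a 200 immediately...
            return 200;
            # ...but return ends processing here, so the request is never
            # proxied to the backend at all.
            proxy_pass http://10.0.0.5:8080;
        }
    }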
I think post_action can help here.
Basically you can respond with a simple static page and set post_action to
forward the request to another location afterwards, which in turn can pass
the request on to the upstream.
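Untested, but something along these lines is what I mean (the backend
address is just an example); if post_action doesn’t fire after a bare
return, serving a tiny static file from /notify instead should do the trick:

    server {
        listen 80;

        location /notify {
            # The caller gets its 200 straight away.
            return 200;
            # Once the response has been sent, nginx makes an internal
            # request to /forward, which does the actual proxying.
            post_action /forward;
        }

        location /forward {
            internal;
            # However long the backend takes, and whatever it answers,
            # the result is simply discarded.
            proxy_pass http://10.0.0.5:8080;
        }
    }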
I don’t believe nginx can do this, since I think it’s too linear in the
way it processes the request.
The way I would approach this is by using the httpd built into C# -
HttpListener.
Once you receive the initial request you can send the 200 immediately, then
spawn a new thread that makes the request to your backend using
HttpWebRequest.
Just some ideas… this is potentially very easy using C#, and I’m sure it is
in quite a few other languages/scripts too.
fastcgi_finish_request() - a special PHP-FPM function to finish the request
and flush all data to the client while continuing to do something
time-consuming (video converting, stats processing etc.).
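On the nginx side this only needs an ordinary FastCGI location (the script
path and socket below are just examples); the early reply happens inside the
PHP script, which calls fastcgi_finish_request() right after emitting the
200 and then carries on with the slow work:

    location /ingest {
        include fastcgi_params;
        # Script path and php-fpm socket are placeholders for your setup.
        fastcgi_param SCRIPT_FILENAME /var/www/ingest.php;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }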
Alternatively, and this is the approach we’re using, you can go via a
Gearman message-bus-style system.
What about using a simple fcgi script that returns 200 straight away and
just writes the request to a file/pipe/database/…
and another script that just reads the file/pipe/database/… for new
requests and sends them to the backend?
JD