Re: Question on Empty GIF module

IMHO, post_action will be much better here. Something like:
location / {
    empty_gif;              # serve the 1x1 transparent gif right away
    post_action /post;      # then fire a subrequest to /post
}

location = /post {
    internal;               # not reachable directly by clients
    proxy_pass http://my_upstream_servers;
}

Thank you Denis and Maxim! Maxim, can you elaborate on what the flow
would actually be like with the above scenario, and what advantages it
may have over the approach posted by Maxim?

I am trying to understand how the above actually works from the
perspective of the browser that originates the request.

For instance, with the above, will the browser's request be fulfilled
by the first location block and then immediately closed (this is what
I am hoping for)? Or does the browser need to wait while nginx
executes the post_action?

I could not find any docs on post_action, so I don't know how it works
or what it does exactly. I came across this: "post_action does not
block new connections, but it blocks the current connection. nginx
handles post_action in the context of the request and connection, so
it does not close the connection to a client before going to
post_action."

So again this makes me wonder whether nginx will immediately serve the
gif back to the browser and close that connection for the speed
improvement I am hoping for.

Assuming the above works the way I would like, do I still need to
return a gif or some other such response from my app? Or can I just
close the connection without upsetting nginx or making nginx think the
backend is down?

Thanks again!


Hello!

On Thu, Apr 24, 2008 at 09:16:53AM -0700, Rt Ibmer wrote:

Thank you Denis and Maxim! Maxim, can you elaborate on what the
flow would actually be like with the above scenario, and what
advantages it may have over the approach posted by Maxim?

You mean “posted by Denis”?

I am trying to understand how the above actually works from the
perspective of the browser that originates the request.

For instance, with the above, will the browser's request be
fulfilled by the first location block and then immediately
closed (this is what I am hoping for)? Or does the browser need
to wait while nginx executes the post_action?

In short: post_action registers a callback that is called after the
request has been processed by the usual handlers (in this case, by the
empty_gif module). It runs after the response has been sent to the
client, so the client sees no delay in request processing.

The only disadvantage of post_action in the current implementation is
that it blocks the connection, i.e. a following keep-alive request
won't be processed until the post_action terminates.
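
If that blocking turns out to matter for you, one possible workaround
(untested here, just a sketch) is to disable keep-alive on the pixel
location, so the browser's next request goes over a fresh connection
rather than waiting on the one held by the post_action:

location / {
    empty_gif;
    post_action /post;
    keepalive_timeout 0;    # send "Connection: close" along with the gif
}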

Anyway, it's much better for your task than using X-Accel-Redirect.

And of course the best way to do this is to write logs and just
process them as needed.
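
For example, something along these lines would record the tracking
hits for offline processing (the log_format fields are only an
illustration; pick whatever variables your processing needs, and the
names and paths here are placeholders):

http {
    # hypothetical format name and fields
    log_format pixel '$remote_addr [$time_local] "$request" '
                     '"$http_referer" "$http_user_agent"';

    server {
        location / {
            empty_gif;
            access_log /var/log/nginx/pixel.log pixel;
        }
    }
}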

I could not find any docs on post_action, so I don't know how it
works or what it does exactly. I came across this:

Docs for post_action don't exist (yet). There are some relevant
Russian mailing list posts, but I can't recall anything in English.

post_action does not block new connections, but it blocks the
current connection. nginx handles post_action in the context of the
request and connection, so it does not close the connection to a
client before going to post_action. So again this makes me wonder
whether nginx will immediately serve the gif back to the browser and
close that connection for the speed improvement I am hoping for.

Closing the connection isn't really needed, since empty_gif returns a
response with Content-Length set, so even HTTP/1.0 browsers will see
the response before the connection is closed. It will block subsequent
requests within a keep-alive connection though, see above.

Assuming the above works the way I would like, do I still need to
return a gif or some other such response from my app? Or can I just
close the connection without upsetting nginx or making nginx think
the backend is down?

You have to return some valid response; a simple "204 No Content"
will do. Alternatively, you may tune nginx to ignore errors from the
backend server, but I don't recommend going that way.
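
For illustration only (and, as said, not the recommended route): one
way to keep nginx from marking the backend as down is to disable
failure accounting on the upstream servers with max_fails=0. The
upstream name matches the config above, but the hosts are
placeholders:

upstream my_upstream_servers {
    # max_fails=0 disables failure accounting, so failed post_action
    # requests won't cause these servers to be marked as down
    server app1.example.com:8080 max_fails=0;
    server app2.example.com:8080 max_fails=0;
}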

Maxim D.