On Thu, Oct 15, 2009 at 09:49:45AM -0400, piotreek wrote:
Many thanks for your comments!
Ad 1. You’re right. We changed the code style, and it should now be consistent and similar to the Nginx style.
Ad 2. This is interesting. We’ve thought about using an upstream; what is more, we’ve tried to do it. However, our module needs to download many files simultaneously, while an upstream allows only one download per request. Am I wrong? Is it possible to download several files simultaneously using upstream?
Take a look at the subrequest mechanism. Basically - take a look at the SSI
module to see how it handles blocks.
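For what it’s worth, issuing several subrequests for one client request might look roughly like the sketch below. ngx_http_subrequest() and ngx_http_post_subrequest_t are the real nginx API; start_fetches, fetch_done and the uris array are made-up names, and the fragment is untested:

```c
/* Sketch only: one subrequest per file to fetch. Several subrequests
 * can be in flight for a single client request. */
static ngx_int_t
fetch_done(ngx_http_request_t *r, void *data, ngx_int_t rc)
{
    /* called when a subrequest finishes; rc is its status */
    return rc;
}

static ngx_int_t
start_fetches(ngx_http_request_t *r, ngx_str_t *uris, ngx_uint_t n)
{
    ngx_uint_t                   i;
    ngx_http_request_t          *sr;
    ngx_http_post_subrequest_t  *ps;

    for (i = 0; i < n; i++) {
        ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
        if (ps == NULL) {
            return NGX_ERROR;
        }

        ps->handler = fetch_done;
        ps->data = NULL;

        /* each subrequest runs through the normal request pipeline */
        if (ngx_http_subrequest(r, &uris[i], NULL, &sr, ps,
                                NGX_HTTP_SUBREQUEST_IN_MEMORY)
            != NGX_OK)
        {
            return NGX_ERROR;
        }
    }

    return NGX_OK;
}
```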
When it comes to gethostbyname - of course you’re right again, we will think about this. Any suggestions?
Take a look at the upstream module; it has an example of how
nginx’s async resolver may be used.
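Roughly, the async resolver is driven like the sketch below (untested fragment; start_resolve and resolve_done are hypothetical names, and the resolver and resolver_timeout must be set in the core location conf for this to work):

```c
/* Sketch: asynchronous name lookup via nginx's resolver instead of
 * blocking in gethostbyname(). */
static void resolve_done(ngx_resolver_ctx_t *ctx);

static ngx_int_t
start_resolve(ngx_http_request_t *r, ngx_str_t *host)
{
    ngx_resolver_ctx_t        *ctx;
    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    ctx = ngx_resolve_start(clcf->resolver, NULL);
    if (ctx == NULL || ctx == NGX_NO_RESOLVER) {
        return NGX_ERROR;
    }

    ctx->name = *host;
    ctx->handler = resolve_done;   /* called when the lookup completes */
    ctx->data = r;
    ctx->timeout = clcf->resolver_timeout;

    return ngx_resolve_name(ctx);
}

static void
resolve_done(ngx_resolver_ctx_t *ctx)
{
    if (ctx->state != NGX_OK) {
        /* lookup failed; ctx->state holds the error */
    } else {
        /* resolved addresses are in ctx->addrs[0 .. ctx->naddrs - 1] */
    }

    ngx_resolve_name_done(ctx);
}
```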
Ad 3. Right, corrected.
Ad 4. This is interesting too. I’ve changed the code by adding just “r->count++;” and removing “r->buffered = 1;”, and it seems to work regardless of the value returned from the body_filter function. When it returns NGX_OK it works; when it returns NGX_DONE after the last chain, it works too… What should be returned?
Not really sure. Also it’s quite possible that I’m wrong, and from
a filter it’s better to use r->buffered (though it’s limited to
stock modules as it’s a bitmask… but SSI uses it for quite a
similar task… it looks to me that this is an obsolete approach,
though) or r->blocked (introduced together with r->count;
AIO uses it).
Probably it’s a good idea to ask Igor…
Furthermore, is it all about r->count? Just increment it and forget? Is reading the Nginx source code the only way to learn that mechanism? Is it described anywhere? We’ve tried to look for information about the asynchronous operation of filters, but we haven’t found anything except existing modules and their source code.
Basically - you increment it, and a call to
ngx_http_finalize_request() decrements it.
And yes, the documentation is written in C.
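To make the counting rule concrete, here is a toy model of the mechanism. This is not nginx code: toy_request_t, toy_acquire and toy_finalize are made-up names that only mimic the semantics of r->count, ngx_http_finalize_request() and ngx_http_free_request():

```c
#include <stddef.h>

/* Toy model of nginx's request reference counting (illustrative only). */
typedef struct {
    unsigned count;     /* active references, like r->count */
    unsigned freed;     /* set once the request is destroyed */
} toy_request_t;

/* A filter that will finish asynchronously takes an extra reference,
 * the way "r->count++;" does in a real module. */
static void
toy_acquire(toy_request_t *r)
{
    r->count++;
}

/* Models ngx_http_finalize_request(): each call drops one reference;
 * the request is destroyed only when the last reference is gone. */
static void
toy_finalize(toy_request_t *r)
{
    if (--r->count == 0) {
        r->freed = 1;   /* stands in for ngx_http_free_request() */
    }
}
```

The point of the model: the main handler finishing does not free the request while the filter still holds its reference; the filter’s own later call to finalize does.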
A new version of the code using r->count is available in the SourceForge SVN (XXSLT - Nginx module for XSLT processing).
- One more thing – collecting data from incoming buffer chains. I’ve seen in other modules (for example in xslt) a strange way to collect all the buffers and process them after getting the last one: you have to copy the content, set in->buf->pos = in->buf->last (which isn’t intuitive ;)) and return NGX_OK from body_filter for each buffer chain. Is this the only way to do it?
This has an important disadvantage – we can’t keep pointers to those buffers, because their content is changed by nginx later. We’ve tried setting some flags in the buffers, with no interesting result.
We would like to collect pointers to the buffers without copying their content, and in some cases return the original, unmodified buffer chains after obtaining the last one (= after reading the whole incoming data). Can we do this without copying all of them?
By setting buf->pos = buf->last you signal upstream modules that
the data in the buffer has already been processed and the buffer may
be reused. If you don’t do this, upstream modules will eventually run
out of free buffers and get stuck.
Basically, you have to process buffers as soon as they arrive, and
if you need some data later - copy it.
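The copy-then-consume pattern can be modelled in a few lines. Again a toy, not nginx code: toy_buf_t only mimics the pos/last bookkeeping of ngx_buf_t, and toy_consume/toy_reusable are made-up names:

```c
#include <string.h>
#include <stddef.h>

/* Toy model of an ngx_buf_t-style buffer: the bytes between pos and
 * last are unprocessed; pos == last means fully consumed. */
typedef struct {
    char  *pos;
    char  *last;
} toy_buf_t;

/* A body filter that needs the bytes later must copy them out and then
 * mark the buffer consumed. Keeping only the pointer is unsafe: the
 * producer will overwrite the memory once the buffer is reusable. */
static size_t
toy_consume(toy_buf_t *b, char *out, size_t cap)
{
    size_t n = (size_t) (b->last - b->pos);

    if (n > cap) {
        n = cap;
    }

    memcpy(out, b->pos, n);   /* take a private copy */
    b->pos = b->last;         /* signal: buffer fully processed */
    return n;
}

/* The producer may reuse the buffer only once it is drained. */
static int
toy_reusable(const toy_buf_t *b)
{
    return b->pos == b->last;
}
```

After toy_consume() returns, the copy is safe to keep across calls, while the original buffer can be refilled by the producer at any time.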