Non-buffered backend

Hi all,

First an easy question: what does the postpone filter do? I can’t find
it in the docs.

On the next questions:

My current understanding of how the webserver sends the response goes
along the following lines (I haven’t spent much time reading the source
yet):

  1. nginx receives the stream to send from the back-end, be it a static
     file or one of the other back-end means, and buffers it until the
     stream is closed
  2. nginx routes the stream through all the needed modules
  3. nginx sends the stream to the client

I’ve a few questions regarding this:
a. Are 2 & 3 indeed separate steps, or will everything be sent in chunks
as soon as it is processed (as I actually expect)?
b. Is it possible not to transfer control to the next module (with
ngx_http_next_*) but to request more chunks instead (maybe by returning
NGX_AGAIN)?
c. What I would actually like to try (in a few weeks) is to skip the
buffering step between 1 & 2. I see two options for this:
I. Buffer a small amount, then call the modules and wait until I receive
a request back for more data. This has the disadvantage that the backend
cannot send its response at the highest rate possible if the modules are
not fast enough.
II. Use two threads, the first to read from the backend and the second to
immediately start feeding the stream into the modules.
Any opinions about these options and the possibility of implementing them
in nginx? Or maybe a third possibility?

Best regards,

Martin S.

On Mon, Dec 03, 2007 at 09:27:11PM +0100, Martin S. wrote:

First an easy question: what does the postpone filter do? I can’t find
it in the docs.

The postpone filter is an internal filter. It is used to output the
responses of several included subrequests that run in parallel.

On the next questions:

My current understanding of how the webserver sends the response goes
along the following lines (I haven’t spent much time reading the source
yet):

  1. nginx receives the stream to send from the back-end, be it a static
     file or one of the other back-end means, and buffers it until the
     stream is closed

No, nginx buffers only until at least one configured buffer is filled.
A client may receive an SSIed, gzipped, chunked, and SSLed response even
before the backend has passed the whole data.

  2. nginx routes the stream through all the needed modules
  3. nginx sends the stream to the client

I’ve a few questions regarding this:
a. Are 2 & 3 indeed separate steps, or will everything be sent in chunks
as soon as it is processed (as I actually expect)?

Yes.

b. Is it possible not to transfer control to the next module (with
ngx_http_next_*) but to request more chunks instead (maybe by returning
NGX_AGAIN)?

Yes, but this is not enough. Writing filters is a complex thing.

c. What I would actually like to try (in a few weeks) is to skip the
buffering step between 1 & 2. I see two options for this: I. Buffer a
small amount, then call the modules and wait until I receive a request
back for more data. This has the disadvantage that the backend cannot
send its response at the highest rate possible if the modules are not
fast enough.

It’s already implemented:

proxy_buffer_size 32; # 32 bytes
proxy_buffers 4 32;

Besides, there is

proxy_buffering off;

II. Use 2 threads, the first to read from the backend and the second to
immediatly start feeding the stream into the modules. Any opinions about
these options and the possibility to implement this in nginx. Or maybe a
third possibility?

No; currently nginx is not thread-safe.

Besides, using threads in this way is a bad idea. The only reasonable use
of threads is to improve disk I/O parallelism.

On Mon, Dec 03, 2007 at 09:27:11PM +0100, Martin S. wrote:

First an easy question: what does the postpone filter do? I can’t find
it in the docs.

The postpone filter is an internal filter. It is used to output the
responses of several included subrequests that run in parallel.

Thanks.

On the next questions:

b. Is it possible not to transfer control to the next module (with
ngx_http_next_*) but to request more chunks instead (maybe by returning
NGX_AGAIN)?

Yes, but this is not enough. Writing filters is a complex thing.

I’m sure it is. It was just my first impression from quickly reading some
parts of the code.

c. What I would actually like to try (in a few weeks) is to skip the
buffering step between 1 & 2. I see two options for this: I. Buffer a
small amount, then call the modules and wait until I receive a request
back for more data. This has the disadvantage that the backend cannot
send its response at the highest rate possible if the modules are not
fast enough.

It’s already implemented.

Wonderful. I had got the impression it was not yet supported from reading
http://wiki.codemongers.com/NginxHttpProxyModule; however, on the tenth
reading you finally read it correctly: it is the request that is buffered,
which is generally not a problem.

Thank you for the quick reply,

Regards,

Martin