Hi all,
There are two upstream modes, buffering and non-buffering, and there
are some differences between them:
- non-buffering mode doesn’t support limit_rate.
- a request in non-buffering mode detects the end of the upstream
response either by a) the upstream closing the connection, or b)
comparing the length of the received data against
headers_out.content_length, while a request in buffering mode detects
the end only by the upstream closing the connection. As a result, in
buffering mode the upstream (such as a memcached server) can’t be
keepalive, which causes the request in nginx to finish only after the
keepalive timeout.
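For reference, the two modes are chosen per location; a minimal sketch of what I mean (the upstream name and the rate value are just examples, not from a real config):

```nginx
location /buffered/ {
    proxy_pass http://backend;
    proxy_buffering on;     # buffering mode (the default for proxy)
    limit_rate 100k;        # rate limiting works in this mode
}

location /unbuffered/ {
    proxy_pass http://backend;
    proxy_buffering off;    # non-buffering mode
    # limit_rate has no effect on the proxied response body here
}
```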
I want to know why these differences exist.
Can’t non-buffering mode support limit_rate? Can’t buffering mode
detect the end of the response by Content-Length?
Thanks,
Wu
Hi,
On 12/12/2010 06:45, Wu Bingzheng wrote:
I want to know why these differences exist.
Can’t non-buffering mode support limit_rate?
The problem with this is that if you’re supporting limit rate, what
happens if you receive more data from the upstream than the limit rate
would allow you to send to the client? You’d have to buffer it (at
least partially, either in memory or on disk). 
Can’t buffering mode detect the end of the response by Content-Length?
I personally can’t immediately see a reason why not (but I’d be
interested to know if there is one). The check is probably missing
simply because in most cases the connections to the upstreams won’t
fail, so it’s more efficient not to compare the size as each packet is
received.
Cheers,
Marcus.
Hi Marcus,
Thanks for your reply, but I’m still not clear on a couple of points.
At 2010-12-12 14:20:05, Eugaia [email protected] wrote:
detects the end only by the upstream closing the connection. As a
result, in buffering mode the upstream (such as a memcached server)
can’t be keepalive, which causes the request in nginx to finish only
after the keepalive timeout.
I want to know why these differences exist.
Can’t non-buffering mode support limit_rate?
The problem with this is that if you’re supporting limit rate, what
happens if you receive more data from the upstream than the limit rate
would allow you to send to the client? You’d have to buffer it (at
least partially, either in memory or on disk). 
I don’t think your explanation is right. In non-buffering mode, even if
there is no limit_rate, the downstream connection may still be slower
than the upstream, for example because the downstream network
conditions are bad. In that case, when the downstream is slower and the
data buffer is full, nginx stops receiving data from the upstream.
So I think adding limit_rate to non-buffering mode would not cause the
data-buffer problem you describe. If the downstream is blocked by
limit_rate and the data buffer is full, nginx can just stop receiving
data, exactly as it does now.
Can’t buffering mode detect the end of the response by Content-Length?
I personally can’t immediately see a reason why not (but I’d be
interested to know if there is one). The check is probably missing
simply because in most cases the connections to the upstreams won’t
fail, so it’s more efficient not to compare the size as each packet is
received.
Just because of efficiency? But it causes some inconvenience: for
example, the upstream server can’t be configured in keepalive mode. (If
the upstream is an HTTP proxy this makes no difference, because nginx
doesn’t support keepalive as an HTTP client anyway.)
There is a good add-on module, HttpUpstreamKeepaliveModule, which can
re-use memcached upstream connections. But it needs the memcached
server to be configured in keepalive mode. That means that if we
configure the memcached upstream module in buffering mode (maybe
because we need limit_rate…), we can’t configure the memcached server
in keepalive mode, so we can’t use HttpUpstreamKeepaliveModule.
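For context, my understanding is that the module is used roughly like this (the upstream name, address, and connection count below are made-up examples, not from a real deployment):

```nginx
# Sketch: re-using memcached connections with HttpUpstreamKeepaliveModule.
upstream memcached_cluster {
    server 127.0.0.1:11211;
    keepalive 32;                 # keep up to 32 idle connections open
}

server {
    listen 80;
    location /cache/ {
        set $memcached_key $uri;  # key used for the memcached lookup
        memcached_pass memcached_cluster;
    }
}
```

But as described above, this only helps if nginx can detect the end of the response without the server closing the connection.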
Cheers,
Marcus.
Thanks very much,
Wu
Hi,
On 12/12/2010 15:40, Wu Bingzheng wrote:
In non-buffering mode, even if there is no limit_rate, the downstream connection
may still be slower than the upstream, for example because the downstream network
conditions are bad. In that case, when the downstream is slower and the data
buffer is full, nginx stops receiving data from the upstream. So I think adding
limit_rate to non-buffering mode would not cause the data-buffer problem you
describe. If the downstream is blocked by limit_rate and the data buffer is full,
nginx can just stop receiving data, exactly as it does now.
Yes, stopping receiving is obviously another option. I agree that it
would probably work in a very similar way to slow downstreams now, and
wouldn’t have any additional problems - and I think it would be a good
idea to add it.
Just because of efficiency?
I was merely speculating on Igor’s motivations. I don’t know. 
But it causes some inconvenience: for example, the upstream server can’t be
configured in keepalive mode. (If the upstream is an HTTP proxy this makes no
difference, because nginx doesn’t support keepalive as an HTTP client anyway.)
There is a good add-on module, HttpUpstreamKeepaliveModule, which can re-use
memcached upstream connections. But it needs the memcached server to be
configured in keepalive mode. That means that if we configure the memcached
upstream module in buffering mode (maybe because we need limit_rate…), we can’t
configure the memcached server in keepalive mode, so we can’t use
HttpUpstreamKeepaliveModule.
Is it not possible to have memcached in both buffering and keepalive
mode?
Perhaps your options are either to write a patch for Nginx or use
something else (HAProxy / Trafficserver might be good alternatives).
Marcus.