Which leads to good compression but, regardless of the size of the
document, always requires chunked encoding to send the resulting data.
For larger documents this doesn't matter, but for smaller documents I
would like to provide a Content-Length header and avoid chunked
encoding.

Is there a buffer setting that specifies the size of the initial
buffer to be compressed, the idea being that if the whole response
body fits in that one buffer, it can be compressed in one go and the
resulting content length discovered?
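For illustration, a minimal sketch of that "compress in one go" idea
using zlib directly (this is not nginx code; the body and buffer size
are made up): with the whole body in memory, a single deflate() call
with Z_FINISH yields the exact compressed size before any header is
sent.

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    static const char body[] = "<html><body>hello</body></html>";
    unsigned char out[512];          /* assumed big enough here */
    z_stream zs;

    memset(&zs, 0, sizeof(zs));
    /* windowBits 15 + 16 selects a gzip wrapper, matching
     * Content-Encoding: gzip. */
    if (deflateInit2(&zs, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return 1;

    zs.next_in = (unsigned char *) body;
    zs.avail_in = sizeof(body) - 1;
    zs.next_out = out;
    zs.avail_out = sizeof(out);

    /* The whole body is available, so one Z_FINISH call compresses
     * it in one pass; total_out is then the exact Content-Length. */
    if (deflate(&zs, Z_FINISH) != Z_STREAM_END) {
        deflateEnd(&zs);
        return 1;
    }
    printf("Content-Length: %lu\r\n", (unsigned long) zs.total_out);
    return deflateEnd(&zs) == Z_OK ? 0 : 1;
}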
On Mon, Oct 22, 2007 at 05:53:22PM +1000, Dave C. wrote:
> document always requires chunked encoding to send the resulting data.
> For larger documents this doesn't matter, but for smaller documents I
> would like to provide a Content-Length header and avoid chunked
> encoding.
What is the problem with chunked encoding?
> Is there a buffer setting that specifies the size of the initial
> buffer to be compressed, the idea being that if the whole response
> body fits in that one buffer, it can be compressed in one go and the
> resulting content length discovered?
The problem is that gzip is a filter: the header is sent to the client
before compression even starts. However, it is possible to postpone
header processing until the compressed size is known.
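As a rough sketch of what "gzip is a filter" means (this is not
nginx's actual implementation; send_chunk() is a hypothetical stand-in
for the server's output layer): input arrives piece by piece,
compressed output must be sent as soon as it is produced, and the
total size only exists after the final Z_FINISH, long after the
response header has gone out.

#include <stddef.h>
#include <zlib.h>

/* Hypothetical output hook: writes one HTTP chunk to the client. */
extern void send_chunk(const unsigned char *buf, size_t len);

/* Feed one piece of the response body through the compressor.
 * Pass last = 1 with the final piece. */
void gzip_filter_body(z_stream *zs, const unsigned char *in,
                      size_t in_len, int last)
{
    unsigned char out[8192];
    size_t have;

    zs->next_in = (unsigned char *) in;
    zs->avail_in = (uInt) in_len;

    do {
        zs->next_out = out;
        zs->avail_out = sizeof(out);

        /* Only after the last deflate(Z_FINISH) returns Z_STREAM_END
         * is zs->total_out the final size, but the header (with or
         * without Content-Length) was sent before the first chunk,
         * so chunked encoding is the only option. */
        deflate(zs, last ? Z_FINISH : Z_NO_FLUSH);

        have = sizeof(out) - zs->avail_out;
        if (have > 0)
            send_chunk(out, have);
    } while (zs->avail_out == 0);
}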
> What is the problem with chunked encoding?

No real problem with well-behaved browsers, but I have heard that
older versions of IE can't pipeline chunked requests (I may be
working from old information).
> > Is there a buffer setting that specifies the size of the initial
> > buffer to be compressed, the idea being that if the whole response
> > body fits in that one buffer, it can be compressed in one go and
> > the resulting content length discovered?
> The problem is that gzip is a filter: the header is sent to the
> client before compression even starts. However, it is possible to
> postpone header processing until the compressed size is known.
The way lighttpd does it is to have a small buffer, 8k by default,
into which the compressed representation is written. If the compressed
body fits in this buffer completely then the Content-Length is known;
otherwise chunked encoding is activated and compression continues in
8k blocks (I think).
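Sketched with zlib (this is not lighttpd's actual code; it assumes
the whole body is already in memory, e.g. a static file, and takes
the 8k figure from the description above), the strategy looks roughly
like this:

#include <stddef.h>
#include <zlib.h>

#define FIRST_BUF 8192  /* the 8k default mentioned above */

/* Try to compress the whole body into one FIRST_BUF-sized buffer.
 * Returns 1 and sets *out_len if it fits (send Content-Length),
 * 0 if it overflows (send this buffer as the first chunk, switch
 * to chunked encoding, and keep draining deflate() in 8k blocks
 * until it returns Z_STREAM_END). */
int try_content_length(z_stream *zs, const unsigned char *body,
                       size_t body_len, unsigned char out[FIRST_BUF],
                       unsigned long *out_len)
{
    zs->next_in = (unsigned char *) body;
    zs->avail_in = (uInt) body_len;
    zs->next_out = out;
    zs->avail_out = FIRST_BUF;

    if (deflate(zs, Z_FINISH) == Z_STREAM_END) {
        /* Whole compressed body fits in the first buffer: the exact
         * Content-Length is known before any byte is sent. */
        *out_len = zs->total_out;
        return 1;
    }
    return 0;
}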
On Mon, Oct 22, 2007 at 08:52:27PM +1000, Dave C. wrote:
> Thanks for your reply.
>
> > What is the problem with chunked encoding?
>
> No real problem with well-behaved browsers, but I have heard that
> older versions of IE can't pipeline chunked requests (I may be
> working from old information).
What do you mean by “pipeline”? MSIE still does not support pipelined
requests.

I have used chunked gzipped responses since 2001 in my Apache module
mod_deflate, and I have not heard about these problems.
> The way lighttpd does it is to have a small buffer, 8k by default,
> into which the compressed representation is written. If the
> compressed body fits in this buffer completely then the
> Content-Length is known; otherwise chunked encoding is activated
> and compression continues in 8k blocks (I think).
Do you mean the modern lighty gzipping filter?