Gzip compression and Transfer-Encoding

Hi

We use the following settings for gzip compression:

gzip             on;
gzip_min_length  1100;
gzip_comp_level  5;
gzip_buffers     4 16k;
gzip_types       text/html text/plain text/xml text/css
                 application/x-javascript application/atom+xml;

This gives good compression, but regardless of the size of the
document the response is always sent with chunked encoding.
For larger documents this doesn’t matter, but for smaller documents I
would like to provide a Content-Length header and avoid chunked
encoding.

Is there a buffer setting that specifies the size of the initial
buffer to be compressed? The idea is that if the whole response body
fits in that one buffer, it can be compressed in one go and the
resulting content length discovered.
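
To make the distinction concrete, here is roughly what I mean (the
header values below are only illustrative, not output from our server):

HTTP/1.1 200 OK
Content-Encoding: gzip
Transfer-Encoding: chunked     <- what I get now, for every response

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 612            <- what I would like for small responses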

Kind Regards

Dave C.

On Mon, Oct 22, 2007 at 05:53:22PM +1000, Dave C. wrote:

> [...] regardless of the size of the document the response is always
> sent with chunked encoding. For larger documents this doesn’t matter,
> but for smaller documents I would like to provide a Content-Length
> header and avoid chunked encoding.

What is the problem with chunked encoding?

> Is there a buffer setting that specifies the size of the initial
> buffer to be compressed? The idea is that if the whole response body
> fits in that one buffer, it can be compressed in one go and the
> resulting content length discovered.

The problem is that gzip is a filter: the header is sent to the client
before compression even starts. However, it is possible to postpone
header processing until the compressed size is known.

Hi Igor,

Thanks for your reply

> What is the problem with chunked encoding?

No real problem with well-behaved browsers, but I have heard that
older versions of IE can’t pipeline chunked requests (I may be
working from old information).

> > Is there a buffer setting that specifies the size of the initial
> > buffer to be compressed? The idea is that if the whole response body
> > fits in that one buffer, it can be compressed in one go and the
> > resulting content length discovered.

> The problem is that gzip is a filter: the header is sent to the client
> before compression even starts. However, it is possible to postpone
> header processing until the compressed size is known.

The way lighttpd does it is to have a small buffer, 8k by default,
into which the compressed representation is written. If the compressed
body fits in this buffer completely then the Content-Length is known;
otherwise chunked encoding is activated and compression continues in
8k blocks (I think).
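
To illustrate the idea, here is a minimal sketch of that decision using
zlib (my own illustration, not lighttpd’s actual code; the 8k buffer and
compression level 5 are only assumptions taken from the figures above):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define GZIP_BUF_SIZE 8192          /* assumed 8k default, as described above */

int main(void)
{
    const char *body = "...response body...";
    unsigned char out[GZIP_BUF_SIZE];
    z_stream zs;

    memset(&zs, 0, sizeof(zs));
    /* windowBits 15 + 16 asks zlib for a gzip wrapper rather than raw deflate */
    if (deflateInit2(&zs, 5, Z_DEFLATED, 15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return 1;

    zs.next_in   = (unsigned char *) body;
    zs.avail_in  = (uInt) strlen(body);
    zs.next_out  = out;
    zs.avail_out = sizeof(out);

    if (deflate(&zs, Z_FINISH) == Z_STREAM_END) {
        /* Whole compressed body fits in one buffer: the length is known,
         * so a Content-Length header can be sent. */
        printf("Content-Length: %lu\r\n", (unsigned long) zs.total_out);
    } else {
        /* Output did not fit: fall back to Transfer-Encoding: chunked and
         * keep calling deflate() with fresh 8k output buffers. */
        printf("Transfer-Encoding: chunked\r\n");
    }

    deflateEnd(&zs);
    return 0;
}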

Cheers

Dave

> > The way lighttpd does it is to have a small buffer, 8k by default,
> > into which the compressed representation is written. If the compressed
> > body fits in this buffer completely then the Content-Length is known;
> > otherwise chunked encoding is activated and compression continues in
> > 8k blocks (I think).

> Do you mean the modern lighty gzipping filter?

Yup.

http://trac.lighttpd.net/trac/wiki/Mod_Deflate

Cheers

Dave

On Mon, Oct 22, 2007 at 08:52:27PM +1000, Dave C. wrote:

> Thanks for your reply
>
> > What is the problem with chunked encoding?
>
> No real problem with well-behaved browsers, but I have heard that
> older versions of IE can’t pipeline chunked requests (I may be
> working from old information).

What do you mean by “pipeline”? MSIE still does not support pipelined
requests.

I have used chunked gzipped responses since 2001 in my Apache module
mod_deflate, and I have not heard of such problems.

> [...] a small buffer, 8k by default, into which the compressed
> representation is written. If the compressed body fits in this buffer
> completely then the Content-Length is known; otherwise chunked encoding
> is activated and compression continues in 8k blocks (I think).

Do you mean the modern lighty gzipping filter?