Gzip - unexplained side effects

Hi,

I’m running nginx 1.0.0 in front of a FCGI backend. We’ve been running
in production for about 4 months, and have really been impressed with
the performance and stability of nginx.

We run a medium-volume application: 1000 to 4000 requests/sec spread over
2 instances by an upstream round-robin load balancer. We use keepalives,
but keep them rather short given the request volume, so the number of
open connections stays manageable.

Recently, I was working to improve our gzip settings. The largest change
was adding an explicit gzip_buffers line (my config is below; the three
lines that are commented out seem to be correlated with this issue).

After the change, the volume of outbound traffic decreased measurably,
as if gzip was not originally doing much due to inadequate buffer space.
Great news, or so I thought. What also changed was that the number of
active connections fell by about 75% - the gzip change was somehow
causing keepalives to be closed prematurely. Also, our volume of
incoming requests decreased a bit: as if some requests were being
aborted (though accepts == handled). The request volume makes
debugging this particular issue somewhat troublesome, since I have yet
to replicate it in a quiet instance.

The guts of my configuration appear below. This is such an unexpected
issue that I’m finding it hard to frame my question well. What I’d like
to know is: how could a change to the gzip buffers (or the other two
commented changes) affect keepalives or overall connection negotiation?
And does anyone have suggestions for how to go about debugging it?

sendfile        off;
tcp_nodelay on;
ignore_invalid_headers  on;
if_modified_since off;

gzip on;
gzip_comp_level 9;
gzip_types text/javascript text/plain application/x-javascript;
gzip_disable "MSIE [1-6]\.(?!.*SV1)"
#gzip_buffers 512 4k;
#gzip_min_length  1100;  #if it fits in one packet, no worries
#gzip_http_version 1.1;

keepalive_timeout  6;
keepalive_requests 4;

Cheers,

Dean

On Nov 8, 2011, at 9:14 PM, dbanks wrote:

> gzip_comp_level 9;

Not really related to your question, but this uses much more CPU than
the default for very little gain. We always run this at “1”. Unless you
have some special requirements (with benchmarks), I’d do the same.
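
For what it’s worth, a minimal sketch of that suggestion in config form
(the comment reflects general benchmarking wisdom, not a measurement
from this thread):

gzip_comp_level 1;  # cheapest in CPU; higher levels shrink output only slightly more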

–Brian

On Tue, Nov 08, 2011 at 09:14:04PM -0500, dbanks wrote:

> incoming requests decreased a bit: as if some requests were being

> #gzip_min_length  1100;  #if it fits in one packet, no worries
> #gzip_http_version 1.1;
>
> keepalive_timeout  6;
> keepalive_requests 4;

If you comment out gzip_buffers, do you see the previous site state?
What is the typical uncompressed and compressed response size?
The default gzip_buffers are “32 4k”, so they can keep up to 128K.
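
For reference, a sketch of what that default expands to (the “512 4k”
figure is the line added in the original config above):

gzip_buffers 32 4k;  # the default: 32 buffers x 4k = 128K of compressed output
# The explicit "gzip_buffers 512 4k" would allow up to 2 MB per response,
# far more than these small FCGI responses need.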

And, as was already suggested, it’s better to use the default
gzip_comp_level of 1.


Igor S.

Hi Igor, thanks for your response!

> If you comment out gzip_buffers, do you see the previous site state?

If I comment out gzip_buffers, gzip_min_length, and gzip_http_version, I
see the previous (desired) behavior.

> What is the typical uncompressed and compressed response size?

Responses are 6.8k or smaller uncompressed, 2.6k compressed
(gzip_comp_level=9), 2.8k compressed (gzip_comp_level=1). About half of
the responses are this size, and the other half are less than 1k
uncompressed.

We currently have more than adequate CPU and want to minimize bandwidth
costs, so I had assumed that more compression was better. Is there
another reason that I should stay with the default gzip_comp_level=1?
(I’m happy to try it; just curious.)

Cheers,

Dean

I spent more time testing this particular issue. I believe that what
appeared to be lost traffic is simply due to the shortened keepalives
and the load balancer favoring keepalive connections over new
connections. However, there still seems to be a link between gzip
settings and the number of open connections (keepalives).

gzip on;
gzip_comp_level 1;
gzip_types text/javascript text/plain application/x-javascript;
gzip_disable "MSIE [1-6]\.(?!.*SV1)"
#gzip_buffers 64 4k;
#gzip_min_length 1100; #if it fits in one packet, no worries
#gzip_http_version 1.1;

I have confirmed that if any one of the three config lines at the bottom
(the ones shown commented out) is uncommented, the number of open
connections drops from ~7600 to ~1300. Removing that line from the
config and reloading restores normal operation. Strange.

Cheers,

Dean

Hello!

On Fri, Nov 18, 2011 at 11:50:53AM -0500, dbanks wrote:

> I spent more time testing this particular issue. I believe that what
> appeared to be lost traffic is simply due to the shortened keepalives
> and the load balancer favoring keepalive connections over new
> connections. However, there still seems to be a link between gzip
> settings and the number of open connections (keepalives).
>
> gzip on;
> gzip_comp_level 1;
> gzip_types text/javascript text/plain application/x-javascript;
> gzip_disable "MSIE [1-6]\.(?!.*SV1)"

It looks like you’ve missed a “;” here.

This is probably the reason for all the “strange” effects you observe:
the missing “;” causes “gzip_disable” to swallow the next directive in
the config.
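
To spell out the mechanism (a reading of the config above; nginx skips
comments even between a directive’s arguments, so the parser keeps
consuming tokens until it reaches a “;”):

# With all three lines commented out, the parser effectively sees:
gzip_disable "MSIE [1-6]\.(?!.*SV1)" keepalive_timeout 6;
# "keepalive_timeout" and "6" become extra gzip_disable patterns, the
# 6-second timeout is never applied, and the 75s default leaves ~7600
# connections open.

# Uncommenting, say, gzip_buffers makes that line the victim instead:
gzip_disable "MSIE [1-6]\.(?!.*SV1)" gzip_buffers 64 4k;
# keepalive_timeout 6; then parses normally, and the short timeout drops
# the count to ~1300.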

> #gzip_buffers 64 4k;
> #gzip_min_length 1100; #if it fits in one packet, no worries
> #gzip_http_version 1.1;

> I have confirmed that if any one of the three config lines at the bottom
> (the ones shown commented out) is uncommented, the number of open
> connections drops from ~7600 to ~1300. Removing that line from the
> config and reloading restores normal operation. Strange.

Maxim D.

Hello Maxim!

Mystery solved. I cannot believe that I missed that!

Cheers,

Dean
