Bug in supporting Quality zero (negate) parameter in Accept-Encoding

Spent the last day chasing down a corner case with Nginx and the Level3
CDN. It turns out that some of the requests they send have the following
Accept-Encoding header:

"Accept-Encoding: compress, gzip;q=0"

Based on RFC 2616 (HTTP/1.1: Header Field Definitions), this is a valid
way to say "NO gzip encoding". Nginx ignores this and actually sends a
gzipped version, which causes the CDN (which follows the RFC) to error out.
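To make the expected behaviour concrete, here is a minimal standalone
sketch (my own illustration, not the nginx source; the gzip_qvalue helper
is invented for the example) of how a server could evaluate the gzip
q-value per RFC 2616. A coding listed without a q gets 1.0, and q=0 means
"do not send this encoding":

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

/* Return the q-value assigned to gzip (1.0 if listed without a q),
 * or -1.0 if gzip is not listed at all. Simplified sketch: it ignores
 * the "*" wildcard and the identity rules. */
static double gzip_qvalue(const char *header)
{
    char buf[1024];
    double q = -1.0;

    strncpy(buf, header, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (char *tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ",")) {
        while (isspace((unsigned char) *tok)) {
            tok++;                          /* skip leading whitespace */
        }

        char *params = strchr(tok, ';');    /* split "coding;q=..." */
        if (params != NULL) {
            *params++ = '\0';
        }

        size_t len = strlen(tok);           /* trim trailing whitespace */
        while (len > 0 && isspace((unsigned char) tok[len - 1])) {
            tok[--len] = '\0';
        }

        if (strcmp(tok, "gzip") != 0 && strcmp(tok, "x-gzip") != 0) {
            continue;
        }

        q = 1.0;                            /* listed without q => 1.0 */

        char *eq = params ? strchr(params, '=') : NULL;
        if (eq != NULL) {
            q = strtod(eq + 1, NULL);       /* "q=0", "q=1.0", ... */
        }
    }

    return q;
}

int main(void)
{
    const char *headers[] = {
        "compress, gzip;q=0",               /* the header the CDN sends */
        "gzip, deflate",
        "gzip;q=1.0, deflate;q=0.8, identity;q=0.4, *;q=0",
    };

    for (unsigned i = 0; i < sizeof(headers) / sizeof(headers[0]); i++) {
        printf("%-50s -> %s\n", headers[i],
               gzip_qvalue(headers[i]) > 0.0 ? "may send gzip"
                                             : "must not send gzip");
    }

    return 0;
}

Run against the header above, it reports "must not send gzip" for
"compress, gzip;q=0".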

Example

According to the RFC, this should NOT return compressed data:

curl -I http://cs.pinkbike.org/127/sprt/c/top.css -H "Accept-Encoding: gzip;q=0"

HTTP/1.1 200 OK
Server: nginx/1.0.0
Date: Fri, 29 Jul 2011 17:06:14 GMT
Content-Type: text/css
Last-Modified: Fri, 15 Jul 2011 20:13:20 GMT
Connection: close
Vary: Accept-Encoding
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Cache-Control: public
Content-Encoding: gzip

Is there a parameter/idea that I am missing or is this a bug?

Radek Burkat

On Fri, Jul 29, 2011 at 10:53:28AM -0700, Radek Burkat wrote:


Is there a parameter/idea that I am missing or is this a bug?

Yes, this is an nginx bug.
What really surprises me is WHY they send this header at all.


Igor S.

What really surprises me is WHY they send this header at all.

You would assume that a CDN would clean/normalize the requests to the
backend and not use the q qualifiers at all, but maybe in some cases
they just pass along the Accept-Encoding header they receive from the
client.

In this case I tracked it down to a large corporation behind a Microsoft
ISA firewall, which could not access our CSS/JS. They were accessing
the CDN with our nginx servers as the origin. I can't comment on the
choices that Microsoft firewalls make in their request headers (or on
the companies that use them, or how they configure them), but it appears
that they do follow the RFC in this case.

In order to guarantee correct delivery via nginx, is the only workaround
right now to disable compression on all our css/js content? Or can you
think of something else?

Thanks Igor.

Let me know if there is anything I can do to help.

Radek Burkat


On 29 Jul 2011 20h11 WEST, [email protected] wrote:


In order to guarantee correct delivery via nginx, is the only
workaround right now to disable compression on all our css/js
content? Or can you think of something else?

If you have their IP range you can try using the Geo module,
http://wiki.nginx.org/HttpGeoModule, like this:

geo $gzip_state {
    default on;
    xxx.yyy.zzz.uuu/nm off; # their IP range… those using the ISA proxy
}

server {
    (…)
    gzip $gzip_state;
    (…)
}

— appa

If you have their IP range you can try using the Geo module

Unfortunately we get millions of page views per day from all over the
world, so I'm looking for a solution not just for this one client but
for all such cases. Good idea though.

I do get reports now and then where someone reports the site without
CSS, but I have always dismissed it as user error. Maybe this problem
is more prevalent and simply goes unreported.

Some more info on this… if you have a client that can actually handle
gzip but requests no gzip, and then receives gzip due to this bug, it
may still render the data correctly, since it does have the capability.
This may mean that if you are using nginx to talk to such clients
directly, the bug may not be as visible.

This particular CDN, however, follows the RFC: it errors out if the
origin sends a compressed version instead of plain, and passes that
error on to the client. So using the CDN makes this problem more visible.


On Fri, Jul 29, 2011 at 08:23:34PM +0100, António P. P. Almeida wrote:

If you have their IP range you can try using the Geo module,
gzip $gzip_state;
(…)
}

The "gzip" directive does not currently support variables.


Igor S.

On Fri, Jul 29, 2011 at 03:11:28PM -0400, rburkat wrote:


In order to guarantee correct delivery via nginx, is the only workaround
right now to disable compression on all our css/js content? Or can you
think of something else?

Thanks Igor.

Let me know if there is anything I can do to help.

The attached patch takes the quality value in Accept-Encoding into account.

On 29 Jul 2011 20h44 WEST, [email protected] wrote:

Some more info on this… if you have a client that can actually handle
gzip but requests no gzip, and then receives gzip due to this bug, it
may still render the data correctly, since it does have the capability.
This may mean that if you are using nginx to talk to such clients
directly, the bug may not be as visible.

This particular CDN, however, follows the RFC: it errors out if the
origin sends a compressed version instead of plain, and passes that
error on to the client. So using the CDN makes this problem more visible.

You could try using the map module (ngx_http_map_module)
like this:

map $http_accept_encoding $gzip_state {
    default on;
    "~gzip;q=0" off;
}

— appa

Hi Igor, thanks for the quick patch. This patch is different from the
nginx-1.1.0 code. I assume the 1.1.0 code supersedes the patch, as it
was posted later. If not, disregard.

The nginx-1.1.0 code has a bug in it (from a glance, the patch would
have been OK).

nginx-1.1.0 does not handle the "gzip;q=1.0" case correctly: it should
compress, and it does not.

I see that the code is exhaustive in its validation of the 0.??? case,
but it does not work for 1.0, 1.00, or 1.000 (just "1" is OK).

My quick hack to correct this was…

-    if (p == last || *p == ',' || *p == ' ') {
+    if (p == last || *p == ',' || *p == ' ' || *p == '.') {
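For illustration, here is a standalone sketch (my own, not the nginx
parser) of a qvalue syntax check following the RFC 2616 grammar, which
allows "0" followed by up to three digits after the dot, and "1" followed
by up to three zeros, so 1.0, 1.00 and 1.000 should all be accepted:

#include <stdio.h>

/* Return 1 if s is a valid qvalue per RFC 2616:
 *   qvalue = ( "0" [ "." 0*3DIGIT ] ) | ( "1" [ "." 0*3("0") ] )
 * Standalone illustration only, not the nginx code. */
static int valid_qvalue(const char *s)
{
    const char *p = s;
    int leading_one, digits;

    if (*p != '0' && *p != '1') {
        return 0;
    }
    leading_one = (*p == '1');
    p++;

    if (*p == '\0') {
        return 1;                       /* plain "0" or "1" */
    }
    if (*p != '.') {
        return 0;
    }
    p++;

    for (digits = 0; *p != '\0'; p++, digits++) {
        if (digits == 3) {
            return 0;                   /* at most three digits after "." */
        }
        if (leading_one ? (*p != '0') : (*p < '0' || *p > '9')) {
            return 0;                   /* "1." may only be followed by zeros */
        }
    }

    return 1;
}

int main(void)
{
    const char *samples[] = { "0", "0.8", "1", "1.0", "1.00", "1.000", "1.001" };
    unsigned i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        printf("%-6s -> %s\n", samples[i],
               valid_qvalue(samples[i]) ? "valid" : "invalid");
    }

    return 0;
}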

I took a quick log this morning from a production box of the most
frequent Accept-Encoding requests, and the gzip;q=1.0 case is common.
In order of descending frequency…

Accept-Encoding: gzip, deflate
Accept-Encoding: gzip,deflate,sdch
Accept-Encoding: gzip,deflate
Accept-Encoding: gzip
Accept-Encoding: gzip, x-gzip
Accept-Encoding:
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Accept-Encoding: identity
Accept-Encoding: gzip;q=1.0, deflate;q=0.8, chunked;q=0.6, identity;q=0.4, *;q=0
Accept-Encoding: deflate, gzip
Accept-Encoding: x-gzip, gzip, deflate
Accept-Encoding: identity,gzip,deflate
Accept-Encoding: gzip, deflate, x-gzip, identity; q=0.9
Accept-Encoding: gzip;q=1.0, deflate;q=0.8, chunked;q=0.6
Accept-Encoding: identity; q=1

Radek


On Wed, Aug 03, 2011 at 02:17:15PM -0400, rburkat wrote:

Hi Igor, thanks for the quick patch. This patch is different from the
nginx-1.1.0 code. I assume the 1.1.0 code supersedes the patch, as it
was posted later. If not, disregard.

The nginx-1.1.0 code has a bug in it (from a glance, the patch would
have been OK).

The attached patch for 1.1.0 should fix the bug.
It's already committed.