How to turn off gzip compression for SSL traffic

Hi,

As you know, due to the BREACH attack (http://breachattack.com), HTTP
compression is no longer safe (I assume nginx doesn't use SSL compression
by default?), so we should disable it.

We are currently using a config like the following:

gzip on;
..

server {
    listen 127.0.0.1:80 default_server;
    listen 127.0.0.1:443 default_server ssl;

Without the need to split into two server sections, is it possible to turn
off gzip when we are using SSL?

Thanks

On Aug 17, 2013, at 8:59, howard chen wrote:

Hi,

As you know, due to the BREACH attack (http://breachattack.com), HTTP compression
is no longer safe (I assume nginx doesn't use SSL compression by default?), so we
should disable it.

Yes, modern nginx versions do not use SSL compression.

Without the need to split into two server sections, is it possible to turn off gzip
when we are using SSL?

You have to split the dual-mode server section into two server sections and
set “gzip off” in the SSL-enabled one. There is no way to disable gzip in a
dual-mode server section, but if you really worry about security in general
the server sections should be different.
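
For illustration, a minimal sketch of the split described above; the server
name and certificate paths are placeholders, not taken from the thread:

server {
    listen 80 default_server;
    server_name example.com;                    # placeholder

    gzip on;   # compress plain-HTTP responses only
}

server {
    listen 443 default_server ssl;
    server_name example.com;                    # placeholder
    ssl_certificate     /path/to/cert.pem;      # placeholder
    ssl_certificate_key /path/to/key.pem;       # placeholder

    gzip off;  # no HTTP compression over SSL, mitigating BREACH
}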

Hi,

Thanks for the insight.

Finally I solved it with:

if ($scheme = https) {
    gzip off;
}

Separating into two servers requires duplicating rules like rewrites,
which is cumbersome.

Thanks anyway

I thought that “if” statements slowed nginx down?

Igor S. Wrote:

Yes, modern nginx versions do not use SSL compression.
[…]
You have to split the dual-mode server section into two server sections and
set “gzip off” in the SSL-enabled one. There is no way to disable gzip in a
dual-mode server section, but if you really worry about security in general
the server sections should be different.

If modern versions do not use SSL compression, why split a dual-mode server?
If gzip is on in the http section, what happens then to the SSL section of a
dual-mode server?


On 18 August 2013 18:09, itpp2012 [email protected] wrote:

If modern versions do not use SSL compression, why split a dual-mode server?
If gzip is on in the http section, what happens then to the SSL section of a
dual-mode server?

+1

This discussion started regarding concerns about BREACH, which (if you have
read up on it) attacks SSL-encrypted, HTTP-level-compressed data, hence the
discussion around gzip.

B. R.

I think you are mistaking SSL/TLS-level compression for gzip HTTP
compression; the two are different.

If you put gzip in the http section, all server sections under this http
block will inherit this gzip config.

This is why Igor recommends splitting the server config into SSL and
non-SSL, and putting ‘gzip on’ only in the non-SSL one.
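
A hedged sketch of that inheritance (names are placeholders): an http-level
‘gzip on’ is inherited by every server block unless a server overrides it,
so the SSL server needs an explicit ‘gzip off’ if gzip is enabled at the
http level:

http {
    gzip on;                         # inherited by both servers below

    server {
        listen 80;
        server_name example.com;     # placeholder; inherits gzip on
    }

    server {
        listen 443 ssl;
        server_name example.com;     # placeholder
        # ssl_certificate / ssl_certificate_key directives omitted here
        gzip off;                    # overrides the http-level setting for HTTPS
    }
}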

On Mon, Aug 19, 2013 at 12:15 AM, Jonathan M. [email protected] wrote:

On Sun, Aug 18, 2013 at 12:31 PM, Paul N. Pace [email protected] wrote:

subsequent HTTPS blocks (I separate HTTP from HTTPS) I have
‘gzip_vary’ off. Am I doing it right?

‘gzip_vary’ was supposed to be ‘gzip’

Igor said:

You have to split the dual-mode server section into two server sections and
set “gzip off” in the SSL-enabled one. There is no way to disable gzip in a
dual-mode server section, but if you really worry about security in general
the server sections should be different.

Adie said:

This is why Igor recommends splitting the server config into SSL and non-SSL,
and putting ‘gzip on’ only in the non-SSL one.

So I can be clear: I have ‘gzip_vary on’ in my http block, and in
subsequent HTTPS blocks (I separate HTTP from HTTPS) I have
‘gzip_vary off’. Am I doing it right?

I think we could all benefit from an nginx recommendation on using gzip with
single and dual-mode server sections regarding a hardening approach against
BREACH. Maxim?


On Aug 18, 2013, at 21:09, itpp2012 wrote:

If modern versions do not use SSL compression, why split a dual-mode server?
If gzip is on in the http section, what happens then to the SSL section of a
dual-mode server?

These are different vulnerabilities: SSL compression is subject to the
CRIME vulnerability, while HTTP compression over SSL is subject to the
BREACH vulnerability.


Igor S.

Hello,

On Sun, Aug 18, 2013 at 4:48 PM, itpp2012 [email protected] wrote:

I think we could all benefit from an nginx recommendation on using gzip with
single and dual-mode server sections regarding a hardening approach against
BREACH. Maxim?

As Igor advised, two different servers to serve HTTP & HTTPS requests are
preferred:

server {
    listen 80;
    server_name inter.net;

    include inter.net_shared_http_https_content.conf;
    # Conf specific to HTTP content delivery here
}

server {
    listen 443 ssl;
    server_name inter.net;
    # ssl_certificate / ssl_certificate_key directives go here

    include inter.net_shared_http_https_content.conf;
    # Conf specific to HTTPS content delivery here
}

If you read the documentation for the gzip directive, you’ll notice that its
default value is ‘off’, so if you don’t put ‘gzip on’ anywhere in the
configuration tree for the servers in question, there will be no HTTP
compression.
Thus, if you kept your server configuration minimal and didn’t explicitly
activate gzip compression somewhere, you are safe by default.
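
As a small sketch of that default (the name is a placeholder): a server
block with no gzip directive anywhere in its http/server/location tree
serves uncompressed responses, since ‘gzip’ defaults to ‘off’:

server {
    listen 443 ssl;
    server_name example.com;      # placeholder
    # ssl_certificate / ssl_certificate_key directives go here

    # no "gzip on;" here or in the enclosing http block,
    # so no HTTP compression is applied to these responses
}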

You couldn’t be safer, as the only way you would be exposed is through a
lack of control/understanding of the directives you explicitly put into your
server(s) configuration.

B. R.

On Mon, Aug 19, 2013 at 12:41 AM, Igor S. [email protected] wrote:

These are different vulnerabilities: SSL compression is subject to the
CRIME vulnerability, while HTTP compression over SSL is subject to the
BREACH vulnerability.

Incorrect.

CRIME attacks a vulnerability in the implementation of SSLv3 and TLS 1.0
using a CBC flaw: the IV was guessable. The other vulnerability was a
facilitator to automatically inject arbitrary content (so attackers could
inject what they wish to run their trial-and-error attack).
CRIME’s conclusion is: use TLS v1.1 or later (not greater than v1.2 for
now).

BREACH attacks the fact that compressed HTTP content encrypted with SSL
makes it easy to guess a known existing header field from the request that
is repeated in the (encrypted) answer by looking at the size of the body.
BREACH’s conclusion is: don’t use HTTP compression underneath SSL
encryption.

B. R.

On Aug 18, 2013, at 14:27, howard chen wrote:

Hi,

Thanks for the insight.

Finally I solved it with:

if ($scheme = https) {
    gzip off;
}

This does not work at the server level. And at the location level it may
work in the wrong way.

Separating into two servers requires duplicating rules like rewrites, which
is cumbersome.

I believe that a dual-mode server block may be subject to vulnerabilities
due to the site map, so BREACH is the least of them.


Igor S.

On Mon, Aug 19, 2013 at 2:04 AM, Igor S. [email protected] wrote:

You’re right. I mixed up things…


B. R.

On Aug 19, 2013, at 9:56 , B.R. wrote:

On Mon, Aug 19, 2013 at 12:41 AM, Igor S. [email protected] wrote:

These are different vulnerabilities: SSL compression is subject to the
CRIME vulnerability, while HTTP compression over SSL is subject to the
BREACH vulnerability.

Incorrect.

CRIME attacks a vulnerability in the implementation of SSLv3 and TLS 1.0 using
a CBC flaw: the IV was guessable. The other vulnerability was a facilitator to
automatically inject arbitrary content (so attackers could inject what they
wish to run their trial-and-error attack).
CRIME’s conclusion is: use TLS v1.1 or later (not greater than v1.2 for now).

You probably mixed it up with BEAST.

B.R. wrote:

BREACH attacks the fact that compressed HTTP content encrypted with SSL
makes it easy to guess a known existing header field from the request that
is repeated in the (encrypted) answer by looking at the size of the body.
BREACH’s conclusion is: don’t use HTTP compression underneath SSL
encryption.

No, the conclusion is: don’t echo back values supplied by the requester as
trusted in your application code. This is the most basic of anti-injection
protections. BREACH is the result of an application-layer problem, and needs
to be solved there. Why would you ever echo arbitrary header or form input
back to the requester alongside sensitive data?

A huge number of established security best practices prevent the BREACH
attack at the application layer; a man-in-the-middle as well as an
exploitable XSS/CSRF vulnerability is needed to even get the attack started.
Fix those issues first. Also, you should likely be rate-limiting responses
by session at your back-end to prevent DoS attacks. For the extra paranoid,
randomly HTML-entity-encode characters of any user data supplied before
echoing it back in a response, and add random padding of random length to
the HEAD of all responses.

At the nginx layer, some sensible rate limits might also be an appropriate
mitigation: thousands to millions of requests are needed to extract secret
data with BREACH.
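
A rough sketch of such a limit, assuming a proxied back-end; the zone name,
rate, server name, and upstream address are all made up and would need
tuning per site:

http {
    # one state bucket per client IP, 10 MB shared zone, 10 requests/second
    limit_req_zone $binary_remote_addr zone=breach_limit:10m rate=10r/s;

    server {
        listen 443 ssl;
        server_name example.com;                 # placeholder
        # ssl_certificate / ssl_certificate_key directives go here

        location / {
            limit_req zone=breach_limit burst=20;
            proxy_pass http://127.0.0.1:8080;    # placeholder back-end
        }
    }
}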

I haven’t seen Google or any other large web site turn off gzip compression
of HTTPS responses yet because of BREACH. If you can actually afford to do
so, your traffic level is simply trivial. We would see approximately an 8x
increase in bandwidth costs (and a corresponding 8x increase in end-user
response time) if we disabled gzip for HTTPS connections.


On Tue, Aug 20, 2013 at 5:12 PM, rmalayter [email protected] wrote:

[…]

I haven’t seen Google or any other large web site turn off gzip compression
of HTTPS responses yet because of BREACH. If you can actually afford to do
so, your traffic level is simply trivial. We would see approximately an 8x
increase in bandwidth costs (and a corresponding 8x increase in end-user
response time) if we disabled gzip for HTTPS connections.

I took a shortcut.
You’re right: deactivating gzip compression is workable only for relatively
small websites.

Anyway, I wonder which real-world scenarios need to send user request data
back in their answers… maybe some applications need this? I can’t imagine a
serious use case, however.

For a quick cheat sheet on possible mitigations, starting with the most
radical ones, some advice has already been provided at the BREACH ATTACK
site (http://breachattack.com).

I maintain the ‘turn gzip compression off’ piece of advice here, as I
suspect people managing HA or heavily trafficked websites already understand
the problem more deeply and don’t need to ask on a specific web server’s
mailing list what ‘recommendation’ it provides… Thus I guess what I wrote
fits the audience here.

B. R.