Blocking unknown hostnames for SSL/TLS connections

Hello,

We’re currently using nginx for SSL/TLS termination, which then
proxies requests to a pair of internal load balancers. Because the
TLS handshake completes before nginx can determine which hostname is
being requested (except when the client sends SNI), nginx accepts a
connection for any hostname and passes the request along to our
internal load balancers. This puts us in a situation where internal
resources can be exposed externally, albeit in a roundabout way.

For example, our internal load balancers have a pool called “news”,
which is reachable as news or news.dc1.example.com and is intended
to be internally accessible only. If you map our external IP address
to news.dc1.example.com (for example, in /etc/hosts) and tell curl to
ignore the invalid cert, nginx proxies the request along to our
internal load balancers and the internal service happily responds.
Here’s a curl example hitting the internal healthcheck endpoint:

curl -k https://news.dc1.example.com
alive

Ideally this would be blocked at our ingress point, which is nginx.

The only way around this that I’ve found so far is to inspect the
$host variable in the server definition for the 443 blocks. The
example below shows the check for the server block which is intended
to respond to www.example.com and stg1.example.com only:

  # if the request coming in doesn't match any of the hosts we know
  # about, throw a 301 and redirect to the default server.
  if ($host !~ ^(www\.example\.com|stg1\.example\.com)$) {
    return 301 https://stg1.example.com;
  }

In our production environment we have a wildcard cert that covers as
many as 6 externally available resources, so I am concerned about the
performance hit of running this check on every request.
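
An intermediate option would be a map block (a sketch, assuming the
same two hostnames; not tested here). nginx backs map with a
precomputed hash table, so the per-request lookup should be cheaper
than evaluating a regex in an if:

  # classify the request host once per request via a hash lookup;
  # any hostname not listed falls through to the default of 0.
  # (map must live at the http level, outside any server block.)
  map $host $known_host {
      default            0;
      www.example.com    1;
      stg1.example.com   1;
  }

  server {
      listen 443 ssl;
      # redirect anything that didn't match a known hostname
      if ($known_host = 0) {
          return 301 https://stg1.example.com;
      }
      ...
  }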

Is there a preferred method of dealing with an issue like this? I’ve
read through the config pitfalls page[0] on readthedocs.org and the
If Is Evil page[1], so I am pretty positive the solution above is
very inefficient. The pitfalls page even describes the preferred
alternative to an if statement for hostname matching[2], but it does
not appear to cover TLS connections. Is there any other documentation
that covers this or could be useful?

Is this something we need to just solve at the internal load balancer
level? Doing a check there to ensure that some pools are accessible
to external resources and others are not? I imagine that will just
shift the inefficiency from nginx to the load balancers.

Thanks!

-pat

[0] http://ngx.readthedocs.org/en/latest/topics/tutorials/config_pitfalls.html
[1] https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
[2] http://ngx.readthedocs.org/en/latest/topics/tutorials/config_pitfalls.html#server-name-if

On Thursday 03 December 2015 11:41:51 Patrick O’Brien wrote:

[…]

http://nginx.org/en/docs/http/server_names.html
http://nginx.org/en/docs/http/request_processing.html

It’s as simple as:

ssl_certificate example.com.crt;
ssl_certificate_key example.com.key;

server {
    listen 443 ssl default_server;
    return 301 https://stg1.example.com;
}

server {
    listen 443 ssl;
    server_name stg1.example.com www.example.com …;

    location / {
        proxy_pass ...;
    }
}

wbr, Valentin V. Bartenev

On Thu, Dec 3, 2015 at 1:44 PM, Valentin V. Bartenev [email protected]
wrote:

[…]

Hi Valentin,


It looks like this works unless you have multiple ssl server
definitions which require different certs. Here is what we ended up
with (more or less):

# rewrite everything on 80 to https

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# server definition for www

server {
    listen 443 ssl;
    server_name www.example.com example.com;
    access_log /var/log/nginx/ssl.access.log;
    error_log /var/log/nginx/ssl.error.log;

    ssl_certificate /etc/nginx/ssl/www.combined;
    ssl_certificate_key /etc/nginx/ssl/www.key;
}

# server definition for wildcard

server {
    listen 443 ssl;
    server_name foo.example.com bar.example.com;
    access_log /var/log/nginx/ssl.access.log;
    error_log /var/log/nginx/ssl.error.log;

    ssl_certificate /etc/nginx/ssl/wildcard.combined;
    ssl_certificate_key /etc/nginx/ssl/wildcard.key;
}

# catch all to force unknown hostnames to www

server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate /etc/nginx/ssl/www.combined;
    ssl_certificate_key /etc/nginx/ssl/www.key;
    return 301 https://www.example.com;
}
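
If a redirect isn't wanted at all, the same catch-all could instead
drop requests for unknown hostnames outright (a sketch using nginx's
non-standard 444 status, which closes the connection without sending
a response):

# catch all: close the connection for unknown hostnames
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate /etc/nginx/ssl/www.combined;
    ssl_certificate_key /etc/nginx/ssl/www.key;
    return 444;
}

A cert is still needed here because the handshake completes before
nginx can decide to drop the connection.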

I did some spot checking via curl and Chrome, and everything appears
to be working how I expect it to.

-pat
