Need SSL state to be visible behind a double nginx proxy

Hi,

I have what I think is an obscure problem to which I cannot find a
solution in the nginx docs. Here’s a summary of the problem:

I have a content management system that I wrote (using Rails), and it
hosts several sites. Each site has SSL, and due to a policy limit by
my hosting provider, each virtual server (“slice”) can only have five
external IP addresses. This creates a problem in that the CMS that
was built to host lots of sites is limited to hosting only five. My
solution was to set up smaller slices in front of the main server, and
the small slices would run nothing but nginx. As such, the
“front-end” nginx instances would proxy requests to the “back-end”
nginx, and the “back-end” nginx would proxy requests through to the
Rails app server instances.

I saw a recent post on SSL pass-through, but this is not what I’m
looking for. As the slices are on the same network at my hosting
provider, I don’t care that the slice-to-slice communications are
encrypted. This is not an issue.

What is an issue is that the application (and thus the Rails app
server instances) need to be able to determine whether a request came
in using SSL or not. Because requests are proxied through two
instances of nginx running on separate slices, the back-end nginx
cannot see whether a request came in over http or https – because
either way, the request arrived at the back-end nginx over http (not
https). Here is a simple depiction of two requests – one over http
and one over https:

client ---http---> nginx ---http---> nginx ---> rails
client ---https--> nginx ---http---> nginx ---> rails

As you can see, it is the second (back-end) nginx instance in the
chain that tells the Rails app server instances whether the request
made to nginx was over http or https. And since all requests to the
second nginx instance are made over http, it passes this on to the
Rails app, which thinks all requests are made over http even if the
communication between the client and the first (front-end) nginx is
over https.

Also worth noting is that there are no SSL errors. If the request is
made over http, it works. And if the request is made over https, it
works. The app server just doesn’t know whether a request was made
over http or https.

I attempted to set the following on the back-end nginx:

proxy_set_header  X_FORWARDED_PROTO  $scheme;

but, not surprisingly, it doesn’t work: $scheme (“http” or “https”) is
taken directly from the request as it arrives at that nginx instance, and
the back-end nginx always receives requests over http. I may have a
solution to the problem, but it will require some extra plumbing work in
Rails to make it functional. What I am able to do is to set a custom
header and then read the value of that header in the Rails code. Here is
what I’ve tried:

proxy_set_header  HTTPS  on;

Then, in the Rails code, I can look for the HTTP_HTTPS environment
variable. (Per the CGI convention, any request header shows up in the
environment prefixed with “HTTP_”.) While this solution will probably
work, I would prefer to have nginx pass the request scheme to the app
servers without having to change the internals of Rails (by rewriting
the request.ssl? method).
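For illustration, here is a minimal sketch of the kind of override I mean. The module and method names are my own, not Rails internals; it only assumes the CGI convention that request headers appear in the env hash with an “HTTP_” prefix:

```ruby
# Hypothetical sketch: decide whether a request is SSL from a Rack/CGI-style
# env hash, trusting the custom HTTPS header set by the front-end proxy.
# Module and method names are illustrative, not part of Rails.
module ProxySslCheck
  def self.ssl?(env)
    env['HTTPS'] == 'on' ||                     # SSL terminated locally
      env['HTTP_HTTPS'] == 'on' ||              # custom header from nginx
      env['HTTP_X_FORWARDED_PROTO'] == 'https'  # conventional forwarded scheme
  end
end
```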

This could be accomplished if nginx would let me conditionally set a
header based on the value of another header. I just can’t figure out
how to read a header. Here’s what I would like to do – keep the
“proxy_set_header HTTPS on;” (above) on the front-end nginx and use
the header on the back-end nginx – like this:

if ($http_https = on) {
    proxy_set_header  X_FORWARDED_PROTO  https;
}
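For what it’s worth, nginx’s map module (part of the standard http modules) can express this kind of header-conditional value without an if block. A sketch, assuming the front end sends “HTTPS: on” and borrowing the upstream name and addresses from the configs later in this thread:

```nginx
# Sketch (untested): translate the custom HTTPS header from the front end
# into a forwarded-proto value. map must live at http level.
map $http_https $forwarded_proto {
    default  http;
    on       https;
}

server {
    listen  10.10.1.1:2000;
    location / {
        proxy_set_header  X_FORWARDED_PROTO  $forwarded_proto;
        proxy_pass        http://app_servers;
    }
}
```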

I realize that this means that anyone could arbitrarily add this
header to a request and trick the Rails server into thinking the
request was secure when it was not, but I would solve this by setting
“ignore_invalid_headers on;” or similar on the front-end nginx.

Can anyone tell me whether it is possible to read custom headers and
to act on them as in my example above? If not, I think this would be
a great addition.

Thanks in advance.

Nick P.

Hi Nick,
Why don’t you simplify your setup? You could run your Rails app on your
big server without nginx on it at all. Just open up the Rails instances
to allow access from your nginx front ends and proxy the requests
straight to them, instead of proxying to nginx and then proxying again.

Or, the other option: on the nginx server that proxies to your Rails
app, have two server directives that listen on two different ports.
When non-SSL traffic comes into your front-end server, proxy it to
server 1, which has your standard proxy_pass. Then have the SSL portion
of your front ends proxy to the second server port on the backend,
which sets X_FORWARDED_PROTO to https manually.

Rob

Hi Rob,
Thank you very much for the suggestions. The first one isn’t really an
option due to the caching that the CMS does (not, that is, without some
serious rework regarding how the cached files are written and deleted).
However, your second solution has a lot of promise. I do have one
question on it, though.

Because of the caching, each site that runs on my CMS has its own nginx
config file, each with two server directives – one for http and one for
https. Using your solution, this would remain the same, except that
instead of listening on port 443, the https server directive would
listen on some other port. My initial tests show that I can use this
for one server directive (the http one):

server_name  www.domain.com;

and this for the other (the https one):

server_name  www.domain.com:1234;

I can’t find any documentation or examples that say whether specifying a
port with the host name in a server_name directive is valid. I’ll need
to do some more testing to be sure it works as I believe it does. Do you
know if this usage of host:port is valid for server_name? If so, then I
believe your second solution will solve my problem. Thanks!

Nick

Hi,
I am not sure if you can use the port in the server_name directive. I
think you need to add listen directives instead.

Note I could be totally off on this, but here is a very, very
simplistic view of what I was trying to accomplish:
http://pastie.org/private/xufufgttegqe9pc5qgea

Basically it demos three different server configs, with two being your
“front-end” servers doing SSL, and one server listening on two ports
and manually setting the protocol on the second before the request is
passed on to Rails.

V/r
Rob


I’m going to try tonight to get this working as you have suggested. I’m
hoping that I’ll be able to do it without using too many IPs, because
then I’ll run into my original problem (the IP limit imposed by my
hosting provider). I believe your solution of listening on the same IP
on multiple ports should work, though. I’ll just assign two listen
ports on the back-end nginx for each site – one for http and one for
https. I imagine it’ll look something like this when I’m finished:

###  front-end nginx  ###

# main nginx config
http {
  upstream backend_server_http {
    server  10.10.1.1:2000;
  }
  upstream backend_server_https {
    server  10.10.1.1:2001;
  }
}

# front-end server (http) for domain.com
server {
  listen  80;
  server_name  domain.com;
  location / {
    proxy_pass  http://backend_server_http;
  }
}
# front-end server (https) for domain.com
server {
  listen  209.20.2.2:443;
  server_name  domain.com;
  ssl  on;
  location / {
    proxy_pass  http://backend_server_https;
  }
}

###  back-end nginx  ###

# main nginx config
http {
  upstream app_servers {
    server  0.0.0.0:3000;
    server  0.0.0.0:3001;
  }
}

# back-end server (http) for domain.com
server {
  listen  10.10.1.1:2000;
  server_name  domain.com;
  location / {
    proxy_pass  http://app_servers;
  }
}
# back-end server (https) for domain.com
server {
  listen  10.10.1.1:2001;
  server_name  domain.com;
  location / {
    proxy_set_header  X_FORWARDED_PROTO  https;
    proxy_pass  http://app_servers;
  }
}

I believe this is what you’ve described, and I also believe that it will
work. Requests for http://domain.com will be proxied upstream to
backend_server_http (at 10.10.1.1:2000), which will proxy to the Rails
app servers with no X_FORWARDED_PROTO being set explicitly. Requests for
https://domain.com will be proxied upstream to backend_server_https (at
10.10.1.1:2001), which will proxy to the Rails app servers with the
X_FORWARDED_PROTO header set explicitly to https.

Thanks again for the suggestion. I’ll send an e-mail back to this list
once I’ve given this a try.

Nick

Thanks for the suggestion. I gave it a try, but it didn’t work. Is
there a difference in the way the upstream server handles
X_FORWARDED_PROTO versus X-FORWARDED-PROTO? I’m using the version with
the underscores instead of the hyphens, and it works when the client
machine hits the back-end nginx server directly.

I know this because I hit the five-IP limit after setting up four
sites on the slice, two of which are running on the CMS app I wrote
(the other two are admin apps that use SSL). The SSL detection in the
Rails code works fine for those first two sites, but the sites that
are proxied through another front-end nginx instance on a separate
slice are the ones where the SSL detection does not work.
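One possibility worth checking here (an assumption on my part, not something confirmed in this thread): by default, nginx ignores incoming request headers that contain underscores, so an X_FORWARDED_PROTO header sent by the front-end nginx may be silently dropped by the back-end nginx before it can be passed on to Rails. A sketch of the workaround on the back-end server, with addresses borrowed from the configs earlier in the thread:

```nginx
# Sketch: allow underscore headers through on the back-end nginx so a
# header like X_FORWARDED_PROTO set by the front end survives the hop.
server {
    listen  10.10.1.1:2000;
    underscores_in_headers  on;  # default is off
    location / {
        proxy_pass  http://app_servers;
    }
}
```

Using the hyphenated X-Forwarded-Proto form on the front end would sidestep the issue entirely.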

Nick

On Thu, Oct 30, 2008 at 03:29:47PM -0500, Nick P. wrote:


No. For each SSL site you want visible to the world, you have to have
one IP – unless of course you want to tell your visitors/customers to
enter https://your-site.com:444 to get to the site.

Hi Almir,

You’re correct that nginx should listen on 80 for http and 443 for
https, and that’s what I show in my example for the front-end servers.
It’s the back-end nginx that is listening on arbitrary ports. The
back-end nginx will not be accessed directly from any client computers –
all requests will be proxied through the front-end servers first.
(Please see my original post for an explanation of why this is
necessary.)

Nick

Well, with this setup (which I thought you were looking for from the
first article), you can put up as many front-end slices as you need,
each supporting five IP addresses for five https sites. You only need
the two listen directives on the back-end nginx to allow it to manually
set X_FORWARDED_PROTO, so your Rails app will know which type of
connection the request came from.

Yes, and that’s my plan exactly. The only reason I need to listen on two
separate ports for each site is that each site caches its content
independently, which means that nginx has to be able to look for the
cached content and serve it up without ever touching Rails. So, for two
sites to each have a cached index.html file (as well as static image
files), I have to have a site-specific path in each server directive.

For instance, consider the following:

server {
  listen  80;
  server_name  site-a.com  *.site-a.com;  # needs to be site-specific
  root  /var/www/site-a;
  location / {
    # serve static files
    if (-f $document_root$uri.html) {
      rewrite (.*) $1.html break;
    }
    # serve cached pages directly
    if (-f $document_root/…/…/…/current/tmp/cache/content/site-a/$uri.html) {
      rewrite (.*) $document_root/…/…/…/current/tmp/cache/content/site-a/$1.html break;
    }
  }
}

I realize I could set the root to “/var/www” (and drop the “/site-a”),
then use the $host or $http_host variable in my static/cache paths, but
my CMS supports *.domain.com-style vhosts, which can’t be represented on
the file system. If I drop the *.domain.com-style vhost support, then I
could have paths like /var/www/site-a.com with symlinks pointing to it
(like /var/www/www.site-a.com → /var/www/site-a.com).

Even if I could figure out a good way to represent this on the file
system, the CMS (and my nginx config for serving static and cached
content) supports serving different files for a request to the same
site based on the requested host. This is useful (and is actually being
used) for a company with multiple locations that wants a site tailored
to each location. For instance, when you request site-a.com, you see
the home page with the address and phone number for the company’s
primary location in the header. Requesting site-b.com shows the exact
same home page except that the header now has the address and phone
number for the company’s secondary location. Similarly, a slightly
different logo image can be served for site-b.com, even though both
images are at /images/logo.gif. As such, simply symlinking
/var/www/site-b.com to point to /var/www/site-a.com would break this
functionality.

I still think the original solution will work – I’ll just have to have
two server directives on the back-end nginx for each site (one for
http, and one for https). This isn’t a problem, as this is how it works
now – only now, the back-end nginx uses server_name to choose the
proper server directive, whereas with the new solution it will use an
internal IP and port number to do the same thing.

Nick

On 07/11/2008, at 1:26 AM, Nick P. wrote:

The significance of listening on multiple ports is that the back-end
nginx can tell the Rails app that requests to port 4000 server were
originally made over http and that requests to port 4001 were
originally made over https. I’ll attempt to illustrate here (this
won’t look right without a fixed-width font).

That shouldn’t be necessary.

 proxy_set_header X_FORWARDED_PROTO https;

is sufficient to tell Rails that the request is secure. So this config
on the backend server should be sufficient:

server {
  # backend http
  listen 4000;
  location / {
    proxy_pass http://rails:3000;
  }
}

server {
  # backend https
  listen 4001;
  location / {
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_pass http://rails:3000;
  }
}

If the backend nginx will pass X_FORWARDED_PROTO from the front end
server, the above shouldn’t be necessary either.

Cheers

Dave

You’re right that using “proxy_set_header X_FORWARDED_PROTO https;” is
all that’s needed to tell Rails that the request is secure. Your
back-end server configs are what I used. The reason for listening on
multiple ports is that the back-end nginx does not pass the
X_FORWARDED_PROTO header (from the front-end nginx) through to Rails.

I’m not quite clear on which part isn’t necessary, though. Your example
config is essentially what I’m using now.

Thanks,
Nick

I just wanted to let everyone know that I tried the method Rob Shultz
suggested, and it works. In case it helps someone in the future, I’ll
sum up the problem and solution here.
My setup and requirements were as follows:

  • custom CMS implemented with Rails
  • single app instance serves many websites
  • each website uses http and https
  • each website has an SSL certificate for its own domain, so each
    needs its own dedicated IP
  • hosting company policy limits me to five IP addresses per slice
  • each site has its own static file directory
  • CMS makes heavy use of caching (each site has its own cache
    directory)

The problem was that in order to realize decent economies of scale, I
needed to be able to host far more than five websites on my slice. In
addition, for reasons of cost and convenience, I only wanted to
maintain a single instance of the Rails app rather than running a copy
of it on multiple servers. The IP address limit of five per slice was a
killer.

The solution to the problem was to set up nginx on multiple front-end
slices, which would each proxy requests to nginx on a single back-end
slice, which would itself then proxy those requests to the Rails app. I
got this set up, and it worked great, except for one thing. Because the
requests to the back-end nginx are always received from the front-end
nginx over http (even when the client request to the front-end nginx is
over https), the Rails app was unable to tell whether the client’s
original request was over http or https. (I couldn’t simply proxy from
the front-end nginx directly to the back-end Rails app due to the app’s
caching requirements.)

Per Rob’s suggestion, I configured two server directives per site on
the back-end nginx – one for http and one for https. However, since the
back-end nginx wouldn’t be talking to clients directly, it does not
need to listen on ports 80 or 443. Instead, it can listen on arbitrary
port numbers as long as the front-end nginx instances know what those
port numbers are. As such, the two server directives for the website
one.com on the back-end nginx can listen on ports 4000 (for http) and
4001 (for https). The server directives for one.com on the front-end
nginx will proxy requests for http://one.com (port 80) to the back-end
nginx on port 4000, and they will proxy requests for https://one.com
(port 443) to the back-end nginx on port 4001.

The significance of listening on multiple ports is that the back-end
nginx can tell the Rails app that requests to port 4000 were originally
made over http and that requests to port 4001 were originally made over
https. I’ll attempt to illustrate here (this won’t look right without a
fixed-width font).

                  front-end                    back-end

one.com           +----------+                 +-----------+
request --http--> | port 80  | --port:4000-->  | port 4000 | --proto:http-->   +-----------+
        \         +----------+                 +-----------+                   | Rails app |
         \        +----------+                 +-----------+                   +-----------+
          https-> | port 443 | --port:4001-->  | port 4001 | --proto:https-->
                  +----------+                 +-----------+

Only one front-end slice is shown, and it is shown only for one site,
but this should give you an idea of how this can be expanded. The
front-end slice can (in my case) be expanded to host five sites, and
more slices can be added as needed.

Of course, to ensure the back-end server can’t be tricked, it’s
necessary to make sure it accepts connections on ports 4000 and 4001
only from the front-end servers.
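A sketch of one way to do that with nginx’s access module (the CIDR range is a placeholder for wherever the front-end slices actually live); binding the listen directives to a private interface, or a firewall rule, would work just as well:

```nginx
# Sketch: only the front-end slices may reach the back-end https port.
server {
    listen  10.10.1.1:4001;
    allow   10.10.0.0/16;  # placeholder: front-end slice network
    deny    all;
    location / {
        proxy_set_header  X_FORWARDED_PROTO  https;
        proxy_pass        http://app_servers;
    }
}
```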

If anyone else runs into a similar IP limitation or has some other need
to proxy http and https traffic through two instances of nginx, I hope
this helps you out.

Nick