SSL pass through

Hello,

I would like to use NGINX as a reverse proxy and pass https requests to a
back-end server without having to install certificates on the NGINX reverse
proxy, because the backend servers are already set up to handle https
requests. What would the configuration look like for this purpose?

Thank you

On Wed, Jan 02, 2013 at 12:18:33PM -0500, zuger wrote:

Hi there,

I would like to use NGINX as a reverse proxy and pass https requests to a
back-end server without having to install certificates on the NGINX reverse
proxy because the backend servers are already set up to handle https
requests.

What you are describing sounds more like a tcp port forwarder than a
reverse proxy to me.

What would the configuration look like for this purpose?

Do not have “listen 443” or “ssl on” in the nginx.conf. Let your separate
port forwarder listen on port 443 and tunnel the data straight to your
back-end server.
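
For illustration only, such a separate forwarder could be something as
simple as socat (one option among many; the backend address below is just
a placeholder):

    # tunnel everything arriving on port 443 straight to the back-end,
    # leaving the TLS stream untouched (no certificates needed here)
    socat TCP-LISTEN:443,fork,reuseaddr TCP:backend.example.internal:443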

nginx.conf for http will be like pretty much any examples you can find.

f

Francis D. [email protected]

Thank you for the quick answer. I will be a little more precise.

I would like to forward https requests to different backend servers based on
the hostname header, e.g. https://machine1.domain.com should be forwarded to
https://10.0.0.1 and https://machine2.domain.com to https://10.0.0.2.

You mentioned something like a tcp port forwarder. Is this tcp port
forwarding part of the NGINX configuration or something outside NGINX?

On 2 January 2013 21:14, zuger [email protected] wrote:

Thank you for the quick answer. I will be a little more precise.

I would like to forward https requests to different backend servers based on
the hostname header, e.g. https://machine1.domain.com should be forwarded to
https://10.0.0.1 and https://machine2.domain.com to https://10.0.0.2.

You can’t do this HTTP-level routing inside nginx without allowing
nginx to terminate the SSL connection, which would require the
certificates to be available to nginx at startup/reload. (The Host
header you want to route on only becomes visible once the connection
has been decrypted.)

Have a read of the Apache HTTPD wiki page on NameBasedSSLVHosts for a
decent discussion of the generic (HTTPd-agnostic) possibilities and
problems.

You mentioned something like a tcp port forwarder. Is this tcp port
forwarding part of the NGINX configuration or something outside NGINX?

I would personally use HAProxy in TCP mode for this purpose, however
there’s a non-trivial operational/PCI-DSS/code problem that crops up
when you don’t terminate your SSL at network edge: you lose
visibility of the client’s IP address at the point at which you do
terminate the SSL. You lose this visibility regardless of any
X-Forwarded-For headers you might use. The HAProxy “PROXY” protocol is
a possible fix for this, but it’s not yet available in a stable
release of HAProxy.
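
To make that concrete, a rough sketch of HAProxy in TCP mode might look
like the following (the frontend/backend names are placeholders, and note
that plain TCP mode like this cannot see the hostname, so it cannot do the
per-hostname routing by itself):

    # pure TCP mode: the bytes arriving on port 443 are tunnelled to the
    # backend untouched, so the certificates live only on the backend
    frontend https_in
        mode tcp
        bind :443
        default_backend ssl_backends

    backend ssl_backends
        mode tcp
        # adding "send-proxy" to the server line would enable the PROXY
        # protocol mentioned above, but only in development builds today
        server machine1 10.0.0.1:443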

Basically, terminate your SSL at the edge. Or get people who
understand your problem/app domain, SSL, and security to design a
solution for you.

Cheers,
Jonathan

Jonathan M. // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

On 2 January 2013 22:12, zuger [email protected] wrote:

Thank you Jonathan.

Your explanations were very helpful, and so was the link to
“NameBasedSSLVHosts”.

Glad it helped, Zuger.

I will now evaluate the two scenarios: terminate SSL in NGINX and forward
http to the backend servers, or use HAProxy.

SSL termination at the edge (I suggest in nginx) will save you much
grief, over time. I would only be considering passing SSL through to a
back-end layer if I had to for specific security reasons, such as
PCI-DSS compliance or because the machine at the network edge was
untrusted somehow.

Do note: with nginx you can proxy_pass to a different SSL FQDN,
after having terminated the SSL connection. I.e.

server {
    listen 443 ssl;
    server_name external-domain.com;

    # ... ssl cert config options which I can’t remember off the top of
    # my head ...

    location / {
        proxy_pass https://my-internal-service-name-which-is-still-ssl-encrypted.internal.fqdn:443;
    }
}

This way, you unwrap the SSL for long enough to route it correctly,
but then encrypt it again to ensure the communication between nginx
and the backend service is secure. This still requires the cert/key
for “external-domain.com” on the nginx server, however.
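
For the two hostnames you mentioned earlier, that pattern might look
roughly like this (certificate paths are placeholders; I'm assuming one
cert/key pair per external name):

    server {
        listen 443 ssl;
        server_name machine1.domain.com;

        # placeholder paths for the real machine1.domain.com cert/key
        ssl_certificate     /etc/nginx/ssl/machine1.domain.com.crt;
        ssl_certificate_key /etc/nginx/ssl/machine1.domain.com.key;

        location / {
            # re-encrypt towards the backend
            proxy_pass https://10.0.0.1;
        }
    }

    server {
        listen 443 ssl;
        server_name machine2.domain.com;

        # placeholder paths for the real machine2.domain.com cert/key
        ssl_certificate     /etc/nginx/ssl/machine2.domain.com.crt;
        ssl_certificate_key /etc/nginx/ssl/machine2.domain.com.key;

        location / {
            proxy_pass https://10.0.0.2;
        }
    }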

Do be aware that this setup won’t allow you to exclude the nginx
machine from being part of your PCI-DSS CDE, I believe. (If that was
meaningless to you, just ignore it!)

Also be aware that, if your nginx machine is actually untrusted, this
doesn’t help. Any attacker who gets control of the box still gets
access to your certs and can sniff any “SSL” traffic s/he likes.

Did I understand correctly that when I use HAProxy I do not have to
terminate SSL at the HAProxy server? SSL will then be terminated at the
backend servers?

[ NB: I’m only suggesting HAP as that’s what I’d use in the scenario
you painted. Other TCP-Level Load Balancers Are Available. ]

HAProxy only learned to speak SSL in a recent-ish development version.
If you need to use a stable release (1.4) then you cannot terminate
SSL with it, and would have to pass the TCP connection through to
something that owned the appropriate SSL certificates.

HTH,
Jonathan

Jonathan M. // Oxford, London, UK
http://www.jpluscplusm.com/contact.html
