I was wondering if there was any interest in extending the FastCGI and
SCGI implementations in Nginx to allow TLS encryption to the
application backend?
Currently if you have Nginx on one machine and your FastCGI / SCGI
application on another machine then communications between the two will
be unencrypted. Of course you can use something like stunnel (which
someone on this list helpfully told me about a while ago) to encrypt the
communications but that seems a bit messy. If Nginx supported TLS
encryption natively then applications using FastCGI or SCGI could be
upgraded to take advantage of that fact if the developer thought it was
worthwhile.
I can’t be the only person who wants a 100% encrypted connection from
the browser to Nginx to the FastCGI application to the database.
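For reference, the stunnel workaround looks roughly like the following (the service name, addresses, port and certificate paths are just placeholders, and each half goes into the stunnel.conf of its own machine):

    ; frontend machine: stunnel in client mode wraps the FastCGI connection in TLS
    ; (nginx on this machine keeps using fastcgi_pass 127.0.0.1:9000)
    [fastcgi-tls]
    client = yes
    accept = 127.0.0.1:9000
    connect = app.example.com:9443

    ; application machine: stunnel terminates TLS and hands plain FastCGI to the app
    [fastcgi-tls]
    cert = /etc/stunnel/backend.pem
    key = /etc/stunnel/backend.key
    accept = 9443
    connect = 127.0.0.1:9000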
2013/01/18 17:45:13 +0000 Some D. [email protected] => To [email protected] :
SD> be unencrypted. Of course you can use something like stunnel (which
SD> someone on this list helpfully told me about a while ago) to encrypt the
SD> communications but that seems a bit messy. If Nginx supported TLS
*CGI interfaces look like a previous-century demand. HTTP(S) seems to be
the trend, even for newly developed databases.
What’s messy about your ‘stunnel’? Why shouldn’t you use nginx on the
backend side with HTTPS as the uplink protocol? Your ‘fastcgi client’ nginx
should then use the nginx on the backend side as an HTTPS upstream.
SD> I can’t be the only person who wants a 100% encrypted connection from
SD> the browser to Nginx to the FastCGI application to the database.
Are there any other web servers that have this feature implemented?
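Roughly like this on the ‘fastcgi client’ nginx, say (the upstream name, address and certificate paths are only an example):

    # front-end nginx: talk HTTPS to the nginx on the backend host instead of FastCGI
    upstream app_backend {
        server 192.0.2.10:8443;            # backend nginx, example address
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/frontend.crt;
        ssl_certificate_key /etc/nginx/frontend.key;

        location / {
            proxy_pass https://app_backend;
        }
    }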
On 20/01/13 15:10, Peter Vereshagin wrote:
> 2013/01/18 17:45:13 +0000 Some D. [email protected] => To [email protected] :
> SD> be unencrypted. Of course you can use something like stunnel (which
> SD> someone on this list helpfully told me about a while ago) to encrypt the
> SD> communications but that seems a bit messy. If Nginx supported TLS
> *CGI interfaces look like a previous-century demand. HTTP(S) seems to be
> the trend, even for newly developed databases.
Unfortunately there really isn’t much that you can use instead of
FastCGI or SCGI if you want to be able to host multiple applications
using different languages in a consistent manner.
> What’s messy about your ‘stunnel’? Why shouldn’t you use nginx on the
> backend side with HTTPS as the uplink protocol? Your ‘fastcgi client’ nginx
> should then use the nginx on the backend side as an HTTPS upstream.
I’m not sure I completely understand your point here. Are you suggesting
that you just run a simple Nginx server on the application server so that the
front-end Nginx server can just pass the requests to the Nginx on the
application server via HTTPS, and then the local Nginx server just passes
the requests on to the application server on 127.0.0.1?
> SD> I can’t be the only person who wants a 100% encrypted connection from
> SD> the browser to Nginx to the FastCGI application to the database.
> Are there any other web servers that have this feature implemented?
On 21/01/13 07:31, Peter Vereshagin wrote:
> SD> that you just run a simple Nginx server on the application server so that the
> SD> front-end Nginx server can just pass the requests to the Nginx on the
> SD> application server via HTTPS, and then the local Nginx server just passes
> SD> the requests on to the application server on 127.0.0.1?
> Short answer: yes.
> 127.0.0.1 or local socket or DMZ neighbor (the whatever).
> What’s wrong with stunnel then?
Nothing is wrong with stunnel other than it adds extra complexity to
your deployment. It would be nice if Nginx could handle this on its own.
It clearly already can, given its support for HTTPS on the browser side,
so I can’t imagine it would be very hard to add support on the FastCGI
or SCGI side.
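Purely to illustrate what I mean (none of these fastcgi_ssl_* directives exist in Nginx; this is an imaginary sketch of what native support might look like):

    # HYPOTHETICAL: nginx has no fastcgi_ssl_* directives; this only sketches
    # what native FastCGI-over-TLS support might look like.
    location / {
        include        fastcgi_params;
        fastcgi_pass   app.example.com:9000;
        fastcgi_ssl    on;                                           # hypothetical
        fastcgi_ssl_trusted_certificate  /etc/nginx/backend-ca.crt;  # hypothetical
    }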
2013/01/21 07:07:46 +0000 Some D. [email protected] => To [email protected] :
SD> On 20/01/13 15:10, Peter Vereshagin wrote:
SD> > 2013/01/18 17:45:13 +0000 Some D. [email protected] => To [email protected] :
SD> > What’s messy about your ‘stunnel’? Why shouldn’t you use nginx on the
SD> > backend side with HTTPS as the uplink protocol? Your ‘fastcgi client’ nginx
SD> > should then use the nginx on the backend side as an HTTPS upstream.
SD>
SD> I’m not sure I completely understand your point here. Are you suggesting
SD> that you just run a simple Nginx server on the application server so that the
SD> front-end Nginx server can just pass the requests to the Nginx on the
SD> application server via HTTPS, and then the local Nginx server just passes
SD> the requests on to the application server on 127.0.0.1?
Short answer: yes.
127.0.0.1 or local socket or DMZ neighbor (the whatever).
What’s wrong with stunnel then?
I have an interest in this as the author of ‘fcgi_spawn’, for Perl CGI-like apps.
> I was wondering if there was any interest in extending the FastCGI and
> SCGI implementations in Nginx to allow TLS encryption to the
> application backend?
> Currently if you have Nginx on one machine and your FastCGI / SCGI
> application on another machine then communications between the two will
> be unencrypted.
Use Nginx on machine A and Nginx on machine B.
Then, on machine B, use FastCGI/SCGI/uWSGI to talk with your applications.
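For example, something along these lines on machine B (the port, certificate paths and socket are only placeholders); the nginx on machine A would then simply proxy_pass to machine B over HTTPS:

    # machine B: nginx terminates the internal HTTPS hop from machine A
    # and talks plain FastCGI to the application locally
    server {
        listen 8443 ssl;
        ssl_certificate     /etc/nginx/machine-b.crt;
        ssl_certificate_key /etc/nginx/machine-b.key;

        location / {
            include       fastcgi_params;
            fastcgi_pass  unix:/var/run/app.sock;    # or 127.0.0.1:9000
        }
    }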
[…]
Manlio P.
2013/01/21 11:15:51 +0000 Some D. [email protected] => To [email protected] :
SD> On 21/01/13 07:31, Peter Vereshagin wrote:
SD> > 2013/01/21 07:07:46 +0000 Some D. [email protected] => To [email protected] :
SD> > SD> On 20/01/13 15:10, Peter Vereshagin wrote:
SD> > SD> > 2013/01/18 17:45:13 +0000 Some D. [email protected] => To [email protected] :
SD> > SD> > What’s messy about your ‘stunnel’? Why shouldn’t you use nginx on the
SD> > SD> > backend side with HTTPS as the uplink protocol? Your ‘fastcgi client’ nginx
SD> > SD> > should then use the nginx on the backend side as an HTTPS upstream.
SD> > SD>
SD> > SD> I’m not sure I completely understand your point here. Are you suggesting
SD> > SD> that you just run a simple Nginx server on the application server so that the
SD> > SD> front-end Nginx server can just pass the requests to the Nginx on the
SD> > SD> application server via HTTPS, and then the local Nginx server just passes
SD> > SD> the requests on to the application server on 127.0.0.1?
SD> >
SD> > Short answer: yes.
SD> >
SD> > 127.0.0.1 or local socket or DMZ neighbor (the whatever).
SD> >
SD> > What’s wrong with stunnel then?
SD>
SD> Nothing is wrong with stunnel other than it adds extra complexity to
SD> your deployment. It would be nice if Nginx could handle this on its own.
SD> It clearly already can, given its support for HTTPS on the browser side,
SD> so I can’t imagine it would be very hard to add support on the FastCGI
SD> or SCGI side.
It’s fine only for the minority of cases where the backend has just one
application (FastCGI or SCGI) server per host.
Back when one application server handled several applications this made more
sense, but it no longer seems to be the trend in web application architecture.
Adding another application to a typical nginx user’s backend means adding
another application server, and therefore another port or socket to listen on.
More ports listening on the outer network means more complications, e.g.,
firewalling and encryption on each of them, on both the frontend and the
backend sides.
At the same time, daemons that are backed by nginx (or that back nginx) should
cope better with outer-network instabilities, e.g., avoiding the ‘slow client
problem’ that can occur between the frontend and backend hosts and keeps the
application servers from being used to their full potential, and so on.
I believe it’s not hard to implement encryption in the nginx fcgi/scgi client;
I just don’t think it is forward-looking, and it could slow the growth in the
number of installations, on backends in particular.
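To make that concrete, here is a rough sketch (the locations, sockets and port are invented): one backend nginx exposes a single encrypted port to the frontend and dispatches to however many local application servers there are, so adding another application adds nothing new to firewall or encrypt.

    # backend host: one TLS port for the frontend, several local app servers behind it
    server {
        listen 8443 ssl;
        ssl_certificate     /etc/nginx/backend.crt;
        ssl_certificate_key /etc/nginx/backend.key;

        location /blog/ {
            include       fastcgi_params;
            fastcgi_pass  unix:/var/run/blog.sock;   # FastCGI application no. 1
        }

        location /shop/ {
            include       scgi_params;
            scgi_pass     unix:/var/run/shop.sock;   # SCGI application no. 2
        }
    }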