I have an nginx server deployed in front of an Internet Information Services
6.0 server. The issue I have is that when requesting content from the IIS
server via HTTPS I am seeing a considerable increase in response time from
the upstream server. For example, requesting the following javascript file
I am seeing the following log entries (using a custom log format to catch
upstream response times):
GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" 200 upstream 0.040 request 0.046 [for reverseproxy via 172.25.50.203:80]
GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" 200 upstream 0.234 request 0.243 [for reverseproxy via 172.25.50.203:443]
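(In case it helps, a log format along these lines will produce entries in
that shape. This is a sketch rather than my exact config: the format name
upstream_time and the access_log path are placeholders, though the
variables used are standard nginx ones.)

    # Hypothetical log format: request line and status, upstream and total
    # request times, plus the proxied host name and the upstream address.
    log_format upstream_time '"$request" $status '
                             'upstream $upstream_response_time request $request_time '
                             '[for $proxy_host via $upstream_addr]';

    access_log /var/log/nginx/access.log upstream_time;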
As you can see it's taking up to an additional 200+ ms to serve up the js
file to nginx when talking to the upstream server via HTTPS. I understand
that there is an overhead with HTTPS but I wouldn't expect it to be this
great.
I also understand it would be better to serve the static content directly
from nginx but at the moment this isn't an option.
I am currently running version 1.3.1 of nginx with the following configure
arguments:
I'm assuming all nginx ssl directives are for communication between the
client and nginx. Do I have any options for improving the HTTPS response
performance with the upstream IIS server, apart from talking to it via
HTTP?
Yes, I suspected that it was somehow renegotiating the SSL handshake for
each request, whereas Firefox/Firebug was caching the handshake and thus
showing quicker response times.
Timing curl over HTTPS gave me an average of 80ms response time; timing
curl over HTTP gave me an average of 10ms, similar to what nginx was
achieving talking to the backend via HTTP.
I'm happy to announce though that you were bang on the money with the
keepalive directive. As soon as I added that into my upstream declaration
the response times dropped considerably and I'm now getting performance
similar to as if I was requesting the content directly from the upstream
server.
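(For anyone else hitting this, the change amounts to something like the
config below. It's a sketch only: the upstream name and the connection
count are placeholders rather than my exact settings, and the backend
address is the one from the logs above. The location block sits inside the
existing server block.)

    upstream reverseproxy {
        server 172.25.50.203:443;
        # Keep idle connections to the backend open so each request does
        # not pay for a fresh TCP + SSL handshake.
        keepalive 16;
    }

    location / {
        proxy_pass https://reverseproxy;
        # Upstream keepalive needs HTTP/1.1 and an empty Connection
        # header, otherwise nginx closes the upstream connection after
        # every request.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }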
Thanks. Yes, I thought it was strange that SSL session reuse didn't work
either, as I thought that had been enabled by default in a recent release.
I can confirm that we don't have the proxy_ssl_session_reuse directive set
in any of the config files, and we have left the upstream server caching
settings at their defaults, which I think for IIS 6.0 is 5 minutes if I
remember correctly.
Yes you're correct, I would agree that it's probably not the best approach
to be talking to an upstream server via HTTPS, but unfortunately at the
moment that's not an option due to how the upstream applications work,
which weren't written by me.
On Sun, Aug 19, 2012 at 11:06:22PM -0400, d2radio wrote:
> I'm happy to announce though that you were bang on the money with the
> keepalive directive. As soon as I added that into my upstream declaration
> the response times dropped considerably and I'm now getting performance
> similar to as if I was requesting the content directly from the upstream
> server.
>
> Thanks Francis, you're a legend.
Strange thing is that SSL session reuse doesn't work for you. It
is on by default and should do more or less the same thing, unless
you've switched it off with the proxy_ssl_session_reuse[1] directive or
forgotten to configure the session cache on your backend server.
(Another question to consider is whether you really need to spend
resources on SSL between nginx and your backend.)
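For reference, the directive in question looks roughly like this in a proxy
location; "on" is already the default, so it only matters if it has been
explicitly disabled somewhere (a sketch, reusing the same hypothetical
upstream name as above):

    location / {
        proxy_pass https://reverseproxy;
        # SSL sessions to the proxied HTTPS backend are reused by default;
        # "off" here would force a full handshake on every upstream request.
        proxy_ssl_session_reuse on;
    }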