Windows nginx 1.7.7.2 Gryphon

22:55 15-10-2014 nginx 1.7.7.2 Gryphon

Tell me a story and I'll tell you my history. The Mock Turtle and the
Gryphon are here to stay. What! Never heard of uglifying! If you don't know
what to uglify is, you are a simpleton so you'd better get on your way.
The nginx Gryphon release is here!

Based on nginx 1.7.7 (15-10-2014, last changeset 5876:973fded4f461) with:

  • Openssl-1.0.1j (CVE-2014-3513, CVE-2014-3567, SSL 3.0 Fallback
    protection, CVE-2014-3568)
  • lua-nginx-module v0.9.13rc1 (upgraded 15-10-2014)
  • Source changes back ported
  • Source changes add-ons back ported
  • Changes for nginx_basic: Source changes back ported
  • Scheduled release: no (openssl fixes)
  • Additional specifications: see ‘Feature list’

Builds can be found here:

Follow releases https://twitter.com/nginx4Windows

Posted at Nginx Forum:

Hey itpp2012, thanks for another fantastic build <3! :smiley:

I have a bit of a question to do with PHP running with your builds.

So I run a site in the top 20,000 sites, on Windows of course, using your
builds, and today I had a big influx of traffic; not a DDoS, but more than
PHP could handle, it seems.

So I have increased the number of PHP processes created to 100 (before it
was 50). But with just 50 PHP processes I kept getting timeouts, and I
checked my concurrent connections on each server and all 3 of them were at
almost 1000 each.

How much traffic can I take, roughly, with 100 PHP processes running behind
nginx? Perhaps I should rescale it to 1000 PHP processes for overkill

:slight_smile:

Posted at Nginx Forum:

With a backend (like PHP) you are always bound to what the backend can
handle; nginx is just a portal here.
The number of backends should be balanced with the best balancing method,
like least_conn/ip_hash (e.g. Using nginx as HTTP load balancer), and also
consider Lua for managing/offloading backends.
So it's not really a numbers game but a distribution one.
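
For illustration only, a minimal least_conn sketch; the upstream name and the
three backend addresses are assumptions, not your actual servers:

upstream php_servers {
    least_conn;                      # or ip_hash if sessions must stick to one backend
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
}

server {
    listen 80;
    location / {
        proxy_pass http://php_servers;   # nginx spreads requests over the pool
    }
}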

Posted at Nginx Forum:

That PHP issue should be solved for a while now; also deploy proper php.ini
settings for each domain.
e.g.:
[PATH=s:/webroot/domain.nl]
open_basedir = s:/webroot/domain.nl
doc_root = s:/webroot/domain.nl
error_reporting = E_ALL & ~E_NOTICE
error_log = s:/logging/php/domain.nl.errors.log
upload_tmp_dir = s:/webroot/domain.nl/uploads
session.save_path = s:/webroot/domain.nl/sessions
upload_max_filesize = 32M
post_max_size = 8M
disable_functions = "curl_exec,curl_multi_exec,dl,exec,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,system"

As for that if: use a map combined with an if.
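
A minimal sketch of that pattern; the variable name is just an example:

map $request_method $not_allowed_method {
    default 1;
    GET     0;
    HEAD    0;
    POST    0;
}

server {
    listen 80;
    # the map does the matching once; the if only tests the resulting flag
    if ($not_allowed_method) {
        return 444;
    }
}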

As for storage, here we use a Debian VM as a storage concentrator; nginx
talks to Debian at the IP level and Debian manages/caches all kinds of
storage units as one pool.
(Mapping a drive is slow; use direct IP access.)

You might also benefit from extra speed when using separate LAN connections.
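
One way to read "direct IP access" in nginx terms is to proxy and cache
static content straight from the storage box over HTTP instead of going
through a mapped drive; a rough sketch, where the IP, paths and sizes are
made up:

proxy_cache_path c:/nginx/cache levels=1:2 keys_zone=staticcache:64m max_size=2g inactive=12h;

server {
    listen 80;
    location /static/ {
        proxy_cache staticcache;
        proxy_cache_valid 200 12h;
        proxy_pass http://192.168.10.50;   # the storage VM, addressed by IP
    }
}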

Posted at Nginx Forum:

itpp2012, with the PHP multi-run script you supply with your builds:

start /min multi_runcgi.cmd 9000
start /min multi_runcgi.cmd 9001
start /min multi_runcgi.cmd 9002
start /min multi_runcgi.cmd 9003
start /min multi_runcgi.cmd 9004
start /min multi_runcgi.cmd 9005
start /min multi_runcgi.cmd 9006
start /min multi_runcgi.cmd 9007
start /min multi_runcgi.cmd 9008
start /min multi_runcgi.cmd 9009
start /min multi_runcgi.cmd 9010
start /min multi_runcgi.cmd 9011
start /min multi_runcgi.cmd 9012
start /min multi_runcgi.cmd 9013
start /min multi_runcgi.cmd 9014
start /min multi_runcgi.cmd 9015
start /min multi_runcgi.cmd 9016
start /min multi_runcgi.cmd 9017
start /min multi_runcgi.cmd 9018
start /min multi_runcgi.cmd 9019
start /min multi_runcgi.cmd 9020
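
For reference, nginx is usually pointed at a pool like this through a
fastcgi upstream; a sketch, assuming the processes listen on 127.0.0.1 on
the ports started above:

upstream php_pool {
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    # ...and so on, one entry per port started above, up to 9020
}

# then, inside the PHP location: fastcgi_pass php_pool;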

What is the maximum number of PHP processes we can have? I even increased
the system paging file to allow me to run 500 of them with 32GB of RAM, but
if I try 1000 I get a lot of memory errors and basically just a crash.

Posted at Nginx Forum:

Yeah, I do the same with the IP; each nginx process knows the machine to
locate via http://172.0.0.1; each machine is assigned its own localhost IP.

The only thing that does not use the IP is the static data each server's
nginx pulls from the mapped hard drive Z:/. But take into consideration that
I run SSDs and also use a RAID6 setup with the following LSI MegaRAID:
http://www.lsi.com/products/raid-controllers/pages/megaraid-sas-9271-8i.aspx

And a CacheCade 120GB SSD to cache frequently accessed data.

I also think nginx's open_file_cache feature would help a lot too. I don't
get any timeouts or lag or problems with static data requests.
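
For what it's worth, a typical open_file_cache block looks like this; the
numbers are examples to tune, not recommendations:

open_file_cache          max=10000 inactive=60s;   # cache up to 10000 open file handles/metadata entries
open_file_cache_valid    120s;                     # revalidate a cached entry every 2 minutes
open_file_cache_min_uses 2;                        # only cache files requested at least twice
open_file_cache_errors   on;                       # also cache "not found" lookups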

Posted at Nginx Forum:

Whatever your system can handle, but anywhere between 4 and 20 should be
OK; using more would only be useful when you make more pools and geoip-split
them up.

e.g. divide the world into 20 portions and have a pool of 8 for each.
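
A rough sketch of that split, assuming the geoip module and a country
database are available; the pool names, ports and the US/EU split are just
examples:

geoip_country c:/nginx/geoip/GeoIP.dat;

map $geoip_country_code $php_pool {
    default php_eu;
    US      php_us;
    CA      php_us;
}

upstream php_eu {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}

upstream php_us {
    server 127.0.0.1:9010;
    server 127.0.0.1:9011;
}

# then, inside the PHP location: fastcgi_pass $php_pool;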

Posted at Nginx Forum:

I suppose I should explain my environment; oddly enough I did a picture a
while back to explain it too.

Here is the pic:
http://hwdmediashare.co.uk/media/kunena/attachments/19987/Untitled_2014-09-19.png

To explain it:
A VRack is a virtual rack; all my servers are connected to each other by an
ethernet cable.

Now, the load balancer is just an IP that the domain name points to, and it
will randomly redirect to one of the 3 PHP servers.

The PHP servers then pull the data they need to process from the Z:/ drive,
which is the storage server. Same with nginx: any static files it needs to
deliver come from the Z:/ drive.

I was also curious: since I use some try_files and fastcgi_split statements
for security with PHP and nginx, would that be causing PHP more traffic,
since files get passed to PHP first? (Maybe my understanding of that is
wrong.)

location / {
    # This will allow for SEF URLs
    try_files $uri $uri/ /index.php?$args;
}

if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 444;
}

location ~ \.php$ {
    # Zero-day exploit defense
    # (see: nginx 0day exploit for nginx + fastcgi PHP)
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
}
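
For completeness, the PHP location also needs a fastcgi_pass (and params) to
actually hand requests to PHP; and note that try_files $uri =404 answers
missing files in nginx itself, so it does not add PHP traffic. A sketch,
with the backend address as an assumption:

location ~ \.php$ {
    try_files $uri =404;                  # missing files are rejected here, before PHP is contacted
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;          # or an upstream pool of PHP processes
}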

Posted at Nginx Forum:

c0nw0nk Wrote:

I don't think 8 PHP processes can take that much traffic?

Depends on what PHP has to do, which needs to be tuned towards expected
traffic; a good cache and pre-coding some PHP in Lua, delivered via
co-sockets, can do wonders. (You can do this now.)
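
As a very rough illustration of the co-socket idea (not itpp2012's actual
code; the location, backend address, port and protocol are invented):

location /prerendered {
    default_type text/html;
    content_by_lua '
        -- fetch pre-rendered content from a helper backend over a
        -- non-blocking cosocket instead of calling PHP for every request
        local crlf = string.char(13, 10)
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)                          -- 1 second
        local ok, err = sock:connect("127.0.0.1", 8080)
        if not ok then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end
        sock:send("GET /cached-page HTTP/1.0" .. crlf .. crlf)
        local body = sock:receive("*a")                -- read until the backend closes
        sock:close()
        ngx.say(body)                                  -- sketch only: echoes the raw response, headers included
    ';
}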

At the moment we're experimenting with loading PHP DLLs into worker space
with Lua and handling PHP non-blocking via co-sockets; it's like embedding
PHP into nginx. (You can do this when we've figured it all out.)

Posted at Nginx Forum:

I don't think 8 PHP processes can take that much traffic?

Posted at Nginx Forum:

That's cool, will you be posting that here or on your site? Looking forward
to it :).

Posted at Nginx Forum: