Windows nginx Gryphon

22:55 15-10-2014 nginx Gryphon

Tell me a story and I’ll tell you my history. The Mock Turtle and the
Gryphon are here to stay. What! Never heard of uglifying! If you don’t know
what uglify is, you are a simpleton, so you’d better get on your way.
The nginx Gryphon release is here!

Based on nginx 1.7.7 (15-10-2014, last changeset 5876:973fded4f461)

  • OpenSSL 1.0.1j (CVE-2014-3513, CVE-2014-3567, SSL 3.0 Fallback protection)
  • lua-nginx-module v0.9.13rc1 (upgraded 15-10-2014)
  • Source changes backported
  • Source changes for add-ons backported
  • Changes for nginx_basic: source changes backported
  • Scheduled release: no (OpenSSL fixes)
  • Additional specifications: see ‘Feature list’

Builds can be found here:

Follow releases

Posted at Nginx Forum:

Hey itpp2012, thanks for another fantastic build <3! :smiley:

I have a bit of a question to do with PHP running with your builds.

So I run a site in the top 20,000 sites, on Windows of course, using your
builds, and today I had a big influx in traffic; not a DDoS, but more than
the servers could handle, it seems.

So I have increased the number of PHP processes created to 100. (Before it
was 50.) But with just 50 PHP processes I kept getting timeouts, and I
checked my concurrent connections on each server and all 3 of them were
almost at capacity.

How much traffic can I take, roughly, with 100 PHP processes running behind
nginx? Perhaps I should rescale it to 1000 PHP processes for overkill.

Posted at Nginx Forum:

With a backend (like PHP) you are always bound to what the backend can
handle; nginx is just a portal here.
The number of backends should be balanced with the best balancing method
(least_conn/ip_hash), e.g.: Using nginx as HTTP load balancer.
Also consider Lua for managing/offloading backends.
So it’s not really a numbers game but a distribution one.
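A minimal sketch of what such a balanced setup could look like (the pool
name and server addresses are hypothetical; least_conn and ip_hash are the
stock nginx balancing methods mentioned above):

```nginx
# Distribute PHP traffic over three FastCGI backends.
# least_conn sends each request to the backend with the fewest
# active connections; swap in ip_hash for session stickiness.
upstream php_pool {
    least_conn;
    server 10.0.0.1:9000;
    server 10.0.0.2:9000;
    server 10.0.0.3:9000;
}

server {
    listen 80;
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   php_pool;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```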

Posted at Nginx Forum:

That PHP issue should be solved for a while now; also deploy proper
settings for each domain:
open_basedir = s:/webroot/
doc_root = s:/webroot/
error_reporting = E_ALL & ~E_NOTICE
error_log = s:/logging/php/
upload_tmp_dir = s:/webroot/
session.save_path = s:/webroot/
upload_max_filesize = 32M
post_max_size = 8M
disable_functions =

As for that if: use a map combined with an if.
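A sketch of that map-plus-if pattern, applied to the request-method filter
that appears further down in this thread (the variable name is my own):

```nginx
# Evaluate the regex once in a map; the if then only tests
# a plain variable, which is cheap and safe in nginx.
map $request_method $not_allowed_method {
    default  1;
    GET      0;
    HEAD     0;
    POST     0;
}

server {
    location / {
        if ($not_allowed_method) {
            return 444;
        }
    }
}
```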

As for storage, here we use a Debian VM as storage concentrator: nginx
talks to Debian on IP level, and Debian manages/caches all kinds of storage
as one pool.
(Mapping a drive is slow; use direct IP access.)

You might also benefit from speed when using separated lan connections.
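One hedged sketch of what such IP-level access could look like (the storage
VM address and cache path are made up): nginx proxies static files from the
storage VM over HTTP and caches them locally, instead of reading a mapped
network drive.

```nginx
# Cache static files pulled from the storage VM over IP,
# instead of serving them from a slow mapped network drive.
proxy_cache_path /var/cache/nginx/static levels=1:2
                 keys_zone=static_cache:50m max_size=10g
                 inactive=60m;

server {
    listen 80;
    location /static/ {
        proxy_pass        http://192.168.1.20;   # storage VM (hypothetical)
        proxy_cache       static_cache;
        proxy_cache_valid 200 301 302 10m;
    }
}
```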

Posted at Nginx Forum:

itpp2012, with the PHP multi-run script you supply with your builds:

rem Launch FastCGI listeners on ports 9000-9020
rem (in a .cmd file; use %p instead of %%p on the command line):
for /L %%p in (9000,1,9020) do start /min multi_runcgi.cmd %%p

What is the maximum number of PHP processes we can have? I even increased
the system paging file to allow me to run 500 of them with 32 GB of RAM.
If I try 1000 I get a lot of memory errors and just a crash, basically.

Posted at Nginx Forum:

Yeah, I do the same with the IP: each nginx process knows which machine to
locate via its IP; each machine is assigned its own localhost.

The only thing that does not use an IP is static data: each server’s nginx
pulls static data from the mapped hard drive Z:/. But take into
consideration we run SSDs, and I also use a RAID 6 setup with the following
LSI MegaRAID.

And a CacheCade 120 GB SSD to cache frequently accessed data.

I also think the nginx open_file_cache feature would help a lot too. I
don’t get any timeouts or lag or problems with static data requests.
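A hedged sketch of what enabling open_file_cache could look like (the
values are illustrative, not tuned):

```nginx
# Cache open file descriptors and metadata for static files,
# cutting repeated open()/stat() calls against the Z:/ share.
http {
    open_file_cache          max=10000 inactive=30s;
    open_file_cache_valid    60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
}
```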

Posted at Nginx Forum:

Whatever your system can handle, but anywhere between 4 and 20 should be
enough; using more would only be useful when you make more pools and
geoip-split them up.

E.g. divide the world into 20 portions and have a pool of 8 for each.
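A rough sketch of such a geoip split, assuming the stock
ngx_http_geoip_module and a country database (pool names, addresses, and
the country selection are all hypothetical):

```nginx
# Route visitors to a regional PHP pool based on country.
geoip_country /etc/nginx/GeoIP.dat;

upstream pool_eu { server 10.0.1.10:9000; server 10.0.1.11:9000; }
upstream pool_us { server 10.0.2.10:9000; server 10.0.2.11:9000; }

map $geoip_country_code $php_pool {
    default  pool_us;
    DE       pool_eu;
    FR       pool_eu;
    NL       pool_eu;
}

server {
    location ~ \.php$ {
        include       fastcgi_params;
        # A variable resolving to an upstream group name works here.
        fastcgi_pass  $php_pool;
    }
}
```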

Posted at Nginx Forum:

I suppose I should explain my environment; oddly enough I did a picture a
while back to explain it too.

Here is the pic

To explain it:
A vRack is a virtual rack; all my servers are connected to each other by an
ethernet cable.

Now, the load balancer is just an IP that the domain name points to, and it
randomly redirects to one of the 3 PHP servers.

The PHP servers then pull the data they need to process from the Z:/ drive,
which is the storage server. Same with nginx: any static files it needs to
deliver come from the Z:/ drive.

I was also curious: since I use some try_files and fastcgi_split_path_info
rules for security with PHP and nginx, would that be causing PHP more
traffic since files get passed to PHP first? (Maybe my understanding of
that is wrong.)
location / {
    # This will allow for SEF URLs.
    try_files $uri $uri/ /index.php?$args;
    if ($request_method !~ ^(GET|HEAD|POST)$ ) {
        return 444;
    }
}

location ~ \.php$ {
    # Zero-day exploit defense
    # (nginx 0day exploit for nginx + fastcgi PHP).
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
}

Posted at Nginx Forum:

c0nw0nk Wrote:

I don’t think 8 PHP processes can take that much traffic?

Depends on what PHP has to do, which needs to be tuned towards expected
traffic; a good cache, and pre-coding some PHP in Lua delivered via
co-sockets, can do wonders. (You can do this now.)

At the moment we’re experimenting with loading PHP DLLs into worker space
via Lua and handling PHP non-blocking via co-sockets; it’s like embedding
PHP into nginx. (You can do this when we’ve figured it all out.)
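A minimal sketch of the co-socket idea with lua-nginx-module (which ships
in these builds); the backend address and the ping/reply protocol are made
up, this only shows non-blocking TCP talk from inside a worker:

```nginx
# Non-blocking TCP talk to a backend from inside a worker,
# using lua-nginx-module's cosocket API (ngx.socket.tcp).
location /lua-backend {
    content_by_lua_block {
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)  -- 1 second
        local ok, err = sock:connect("127.0.0.1", 9000)
        if not ok then
            ngx.status = 502
            ngx.say("backend connect failed: ", err)
            return
        end
        sock:send("ping\n")
        local line = sock:receive("*l")
        sock:setkeepalive(10000, 50)  -- return connection to pool
        ngx.say(line or "no reply")
    }
}
```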

Posted at Nginx Forum:

I don’t think 8 PHP processes can take that much traffic?

Posted at Nginx Forum:

That’s cool; will you be posting that here or on your site? Looking forward
to it :).

Posted at Nginx Forum: