Tell me a story and I’ll tell you my history. The Mock Turtle and the Gryphon are here to stay. What! Never heard of uglifying! If you don’t know what to uglify is, you are a simpleton so you’d better get on your way.
The nginx Gryphon release is here!
Based on nginx 1.7.7 (15-10-2014, last changeset 5876:973fded4f461) with;
Hey Itpp2012, thanks for another fantastic build <3!
I have a bit of a question about PHP running with your builds.
So I run a site in the top 20,000 sites, on Windows of course, using your builds, and today I had a big influx of traffic; not a DDoS, but more than PHP could handle, it seems.
So I have increased the number of PHP processes created to 100 (before it was 50). But with just 50 PHP processes I kept getting timeouts, and when I checked my concurrent connections on each server, all 3 of them were at almost 1000 each.
How much traffic can I take, roughly, with 100 PHP processes running behind nginx? Perhaps I should rescale it to 1000 PHP processes for overkill.
With a backend (like PHP) you are always bound to what the backend can handle; nginx is just a portal here.
The number of backends should be balanced with the best-fitting balancing method, like leastconn/iphash, ea: Using nginx as HTTP load balancer, and also consider Lua for managing/offloading backends.
So it's not really a numbers game but a distribution one.
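Ea. a minimal sketch of such a balanced pool (the pool name and backend addresses are made up for illustration):

upstream php_pool {
    least_conn;                 # send each request to the backend with the fewest active connections
    # ip_hash;                  # alternative: pin each client IP to one backend
    server 192.168.1.11:9000;
    server 192.168.1.12:9000;
    server 192.168.1.13:9000;
}

location ~ \.php$ {
    fastcgi_pass php_pool;      # nginx only distributes, the pool does the work
    include fastcgi_params;
}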
That PHP issue should be solved for a while now; also deploy proper php.ini settings for each domain.
ea:
[PATH=s:/webroot/domain.nl]
open_basedir = s:/webroot/domain.nl
doc_root = s:/webroot/domain.nl
error_reporting = E_ALL & ~E_NOTICE
error_log = s:/logging/php/domain.nl.errors.log
upload_tmp_dir = s:/webroot/domain.nl/uploads
session.save_path = s:/webroot/domain.nl/sessions
upload_max_filesize = 32M
post_max_size = 8M
disable_functions = "curl_exec,curl_multi_exec,dl,exec,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,system"
For that if: use a map and a single if, ea. the sketch below.
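Ea. (the variable name and blocked extensions are made up for illustration); the map goes in the http{} block, the single if in the server{} block:

map $request_uri $deny_it {
    default          0;
    ~*\.(bak|old)$   1;         # mark requests for backup files
}

if ($deny_it) {
    return 403;                 # one simple if acting on a mapped value is safe
}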
As for storage, here we use a Debian VM as a storage concentrator; nginx talks to Debian at the IP level and Debian manages/caches all kinds of storage units as one pool.
(Mapping a drive is slow, use direct IP access.)
You might also benefit speed-wise from using separate LAN connections.
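Ea. one hedged way to do the direct IP access for static files (the storage host address and path are assumptions):

location /static/ {
    proxy_pass http://192.168.1.20/static/;   # fetch over IP from the storage host
                                              # instead of reading a mapped Z:/ drive
}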
What is the maximum number of PHP processes we can have? I even increased the system paging file to allow me to run 500 of them with 32GB of RAM, but if I try 1000 I get a lot of memory errors and basically just a crash.
Whatever your system can handle, but anywhere between 4 and 20 should be OK; using more would only be useful when you make more pools and geoip-split them up.
ea. divide the world into 20 portions and have a pool of 8 for each, along these lines:
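A sketch with three pools instead of twenty (the GeoIP database path, country split and addresses are made up for illustration):

# http{} level
geoip_country GeoIP.dat;                       # ngx_http_geoip_module database
map $geoip_country_code $php_pool {
    default  pool_eu;
    US       pool_us;
    JP       pool_asia;
}
upstream pool_eu   { least_conn; server 192.168.1.11:9000; server 192.168.1.12:9000; }
upstream pool_us   { least_conn; server 192.168.2.11:9000; server 192.168.2.12:9000; }
upstream pool_asia { least_conn; server 192.168.3.11:9000; server 192.168.3.12:9000; }

# server{} level: each visitor lands in the pool mapped for their country
location ~ \.php$ {
    fastcgi_pass $php_pool;
    include fastcgi_params;
}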
To explain it:
A VRack is a virtual rack; all my servers are connected to each other by an ethernet cable.
Now the load balancer is just an IP that the domain name points to, and it will randomly redirect to one of the 3 PHP servers.
The PHP servers then pull the data they need to process from the Z:/ drive, which is the storage server. Same with nginx: any static files it needs to deliver come from the Z:/ drive.
I was also curious: since I use some try_files and fastcgi_split statements for security with PHP and nginx, would that be causing PHP more traffic, since files get passed to PHP first? (Maybe my understanding of that is wrong.)
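I mean the usual pattern, something like this (simplified; the backend address is just an example):

location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    set $path_info $fastcgi_path_info;    # save it, try_files resets the variable
    try_files $fastcgi_script_name =404;  # nginx checks the file on disk first,
                                          # so nonexistent scripts never reach PHP
    fastcgi_param PATH_INFO $path_info;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}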
I don't think 8 PHP processes can take that much traffic?
Depends on what PHP has to do, which needs to be tuned towards expected traffic; a good cache, plus pre-coding some PHP in Lua and delivering that via co-sockets, can do wonders. (You can do this now, ea. the sketch below.)
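A hedged sketch of pre-coding a tiny PHP endpoint in Lua with a co-socket (the Redis address, the "hits" key and the script path are made up for illustration):

location /hits {
    content_by_lua_file conf/hits.lua;
}

-- hits.lua
local sock = ngx.socket.tcp()             -- co-socket, never blocks the worker
sock:settimeout(1000)
local ok, err = sock:connect("127.0.0.1", 6379)
if not ok then
    return ngx.exit(502)
end
sock:send("GET hits\r\n")                 -- Redis inline command
local hdr = sock:receive("*l")            -- bulk header, ea. "$4", or "$-1" when missing
if hdr and hdr:sub(1, 1) == "$" and hdr ~= "$-1" then
    ngx.say(sock:receive(tonumber(hdr:sub(2))))
    sock:receive(2)                       -- eat the trailing CRLF
else
    ngx.say("0")
end
sock:setkeepalive(10000, 100)             -- hand the connection back to the pool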
At the moment we're experimenting with loading PHP DLLs into worker space with Lua and handling PHP non-blocking via co-sockets; it's like embedding PHP into nginx. (You can do this when we've figured it all out.)