Nginx - PHP FPM Server load is very high


The server is struggling to handle the traffic.
I have 8 GB of RAM on a quad-core server.

I have changed the config file for nginx, and I have the default config for
Please advise the best config.

Right now, the load is about 50.

user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/;

events {
    worker_connections 1024;
}

http {
    client_header_timeout 3m;
    client_max_body_size  4M;
    client_body_timeout   3m;
    send_timeout          3m;

    client_header_buffer_size    1k;
    large_client_header_buffers  4 4k;

    gzip on;
    gzip_min_length  1100;
    gzip_buffers     4 8k;
    gzip_types       text/plain;

    output_buffers   1 32k;
    postpone_output  1460;

    sendfile         on;
    tcp_nopush       on;
    tcp_nodelay      on;
    keepalive_timeout 75 20;
}

Posted at Nginx Forum:

On 6 March 2014 19:18, agriz [email protected] wrote:

The server is struggling to handle the traffic.
I have 8GB ram. Quad core server.
Right now, the load is about 50

I very much doubt your problem is a simple one which can be solved by
tweaking your nginx config. I say this because you have (50/4 == 12.5)
times as much work to do on this server as you have CPU cores to do it
on. It looks very much like you need … more hardware!

You may be able to offload some static file serving from PHP to
nginx/etc via X-Accel-Redirect; you might cache some content using
Nginx’s (or some other) HTTP caching. But you’ll need to really
understand your application in order to do them correctly, and no-one
here can tell you /exactly/ how to implement them for your situation.
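To give a rough idea of the X-Accel-Redirect approach: the PHP application authorises the request and emits a header, and nginx then serves the file itself from an internal-only location, so PHP never has to stream the bytes. The location name and filesystem path below are hypothetical; adapt them to your own layout.

```nginx
# PHP emits: header('X-Accel-Redirect: /protected/report.pdf');
# nginx intercepts that header and serves the file directly.
location /protected/ {
    internal;                  # not reachable by external clients
    alias /var/www/files/;     # hypothetical directory with the real files
}
```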

If you want to fix this quickly, buy/lease/provision more hardware/VMs
now. If you want to fix it cheaply, you’ll need to spend time
investigating what the PHP is doing that’s taking the time, hence how
you can help it do it more efficiently (X-Accel-Redirect) or not at
all (caching).


— Original message —
From: “agriz” [email protected]
Date: 6 March 2014, 21:18:05

gzip_min_length 1100;

You have given very little information to go on, but in any case, to reduce the
system load you should use fastcgi_cache.
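A minimal fastcgi_cache sketch, assuming a PHP-FPM backend on a Unix socket (the socket path, cache directory, and zone name below are placeholders, and you will need to tune validity times and add cache-bypass rules for logged-in users or POST requests before using anything like this in production):

```nginx
# Cache zone: hypothetical path and name, 10 MB of keys, 512 MB of pages.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m
                   max_size=512m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # hypothetical socket

        fastcgi_cache phpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;

        # Expose HIT/MISS for debugging while you tune the cache.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With a cache like this in place, repeated requests for the same URI are answered from disk by nginx without touching PHP-FPM at all, which is usually the single biggest lever against a load average of 50.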