Nginx makes mysqld die all the time

Greetings,

I just migrated a few websites from Apache to nginx + php-fpm, on a
CentOS 6.6 virtual server. The sites are up but… now mysqld (MariaDB,
actually) dies every 10-20 minutes with status:

mysqld dead but subsys locked

or

mysqld dead but pid file exists

For reasons not really relevant here, I cannot post my nginx conf
right away. I will do that in a few hours, when I'm back
at my desk. Since the crashes are so frequent, however, any
help to save time is very welcome, even if it's just a request
for other specific info besides the nginx conf files.

TIA,
Marco

Hi,

When I migrated from apache+mod_php to nginx+php-fpm I found I had a few
websites using persistent mysql connections which never closed. I had to
disable this in php.ini so all the sites fell back to using
non-persistent connections.
I don't know if this will help, as in my case it was MySQL rather than
MariaDB, but I imagine something will have been logged somewhere; it may
just take a bit of time to find.
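If it helps, these are probably the relevant php.ini switches (mysqli has
its own directive; for PDO, persistence is a per-connection attribute
rather than an ini setting, so there is nothing to flip there):

```ini
; Disable persistent MySQL connections (names from the stock php.ini;
; check which extensions your sites actually use):
mysql.allow_persistent  = Off
mysqli.allow_persistent = Off
```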

On 2015-08-18 14:36, Steve W. wrote:

Hi,

When I migrated from apache+mod_php to nginx+php-fpm I found I had a
few websites using persistent mysql connections which never closed.

Steve, thanks for this tip. This surely was part of the problem, but
not all of it.

Sure enough, when I first noticed this problem, I also found in dmesg
messages like this:

Out of memory: kill process 31066 (mysqld) score 30155 or a child
Killed process 31066 (mysqld)

Yesterday, as soon as I was able to ssh in again, I set

mysql.allow_persistent = Off in php.ini (it was On)

and restarted everything. Page load time decreased noticeably AND there
were no more mysql crashes for the rest of the day.
This morning, however, I found mysql had died again with the same symptom
(dead but subsys locked) and a DIFFERENT message in dmesg, one I had
never seen before:

Out of memory: kill process 13812 (php-fpm) score 18223 or a child
Killed process 13812 (php-fpm)
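As an aside, kernel OOM lines like the ones above are easy to pull apart
with standard tools; a minimal sketch (the sample line is copied from the
dmesg output above, the field positions follow from it):

```shell
# A kernel OOM-killer line as it appears in dmesg (copied from above):
line="Out of memory: kill process 13812 (php-fpm) score 18223 or a child"

# Field 6 is the PID, field 7 the (parenthesised) process name:
pid=$(echo "$line" | awk '{print $6}')
name=$(echo "$line" | awk '{gsub(/[()]/, "", $7); print $7}')
echo "$pid $name"    # → 13812 php-fpm

# To list past OOM events on a live box, something like:
# dmesg | grep -i 'out of memory'
```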

The nginx and php-fpm configuration files are pasted below (I have
several virtual hosts all configured that way for wordpress, plus one
drupal and one semantic scuttle site, if it matters). What next? Any
help is welcome!

Marco

[root ~]# more /etc/nginx/nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    server_names_hash_bucket_size  64;
    server_tokens off;

    # log_format needs a name as its first argument, and must be
    # defined before the access_log that uses it:
    log_format  main  '$remote_addr - $remote_user [$time_local] $status '
                      '"$request" $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main  buffer=32k;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
}

And this is the configuration for one of the wordpress sites; I only
changed the domain name. The configuration is due to the fact that, for
several reasons out of my control, I must run two fully independent
wordpress installations, but "nested" into each other, that is:

myblog.example.com/ (english blog, served by wordpress installed in
$documentroot/myblog)
myblog.example.com/it (italian version, served by a separate wordpress
installed in $documentroot/myblog_it)

The above worked fine with apache. Can the equivalent config for
nginx be related to the problem I'm seeing? If yes, how, and how do I
fix it? And while we are at it: advice on anything else I could optimize
is also very welcome, of course, even if not related to the main problem.

[root ~]# more /etc/nginx/conf.d/stop.conf

server {
    listen 80;
    server_name myblog.example.com;
    root /var/www/html/wordpress/;
    include /etc/nginx/default.d/*.conf;

    # configuration for the italian version, installed
    # in root/myblog_it, but having as url myblog.example.com/it

    location ^~ /it/ {
        rewrite ^/it/(.+) /myblog_it/$1 ;
        index   /myblog_it/index.php;
    }

    location /myblog_it/ {
        try_files $uri $uri/ /myblog_it/index.php?args;
        index index.php;

        location ~ \.php$ {
            fastcgi_pass   unix:/tmp/phpfpm.sock;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

    ##################################################################
    # main blog

    location ^~ / {
        rewrite ^/(.+) /myblog/$1 ;
        index   /myblog/index.php;
    }

    location /myblog/ {
        try_files $uri $uri/ /myblog/index.php?args;
        index index.php;
    }

    # note: the "." must be escaped, and $fastcgi_script_name
    # already starts with a slash:
    location ~ \.php$ {
        fastcgi_pass   unix:/tmp/phpfpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}

php-fpm configuration:

[root ~]# grep -v '^;' /etc/php-fpm.conf | uniq

include=/etc/php-fpm.d/*.conf

[global]
pid = /var/run/php-fpm/php-fpm.pid

error_log = /var/log/php-fpm/error.log

daemonize = no

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

AND ALSO:

[root ~]# grep -v '^;' /etc/php-fpm.d/www.conf | uniq

[www]
listen.allowed_clients = 127.0.0.1
listen = /tmp/phpfpm.sock
listen.owner = nginx
listen.group = nginx
user = nginx
group = nginx

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

slowlog = /var/log/php-fpm/www-slow.log

php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on

php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
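For context on why a pool this size can wake the OOM killer: with
pm.max_children = 50, the worst-case memory demand from php-fpm alone is
max_children times the resident size of one child. A back-of-the-envelope
check (the 60 MB per-child figure is an assumption, not a measurement;
check yours with ps):

```shell
# Worst-case php-fpm memory, assuming ~60 MB RSS per child (hypothetical):
max_children=50
mb_per_child=60
echo "$((max_children * mb_per_child)) MB"   # → 3000 MB

# Measure the real average per-child RSS (in KB) on the server, e.g.:
# ps -C php-fpm -o rss= | awk '{sum += $1; n++} END {print sum/n}'
```

If that worst case exceeds what the VPS has left after mysqld, the kernel
will eventually kill something, which matches the dmesg lines above.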

Hello.

Perhaps this can help you with the out-of-memory problem: the OOM Killer:

https://www.google.com/search?client=ubuntu&channel=fs&q=OOM+Kiler&ie=utf-8&oe=utf-8

Kind regards,

Oscar

On 2015-08-19 09:01, oscaretu . wrote:

Hello.

Perhaps this can help you with the out-of-memory problem: the OOM Killer:

https://www.google.com/search?client=ubuntu&channel=fs&q=OOM+Kiler&ie=utf-8&oe=utf-8


Oscar, and list,

I just looked at the several /var/log/messages files. The most recent
one is from August 16th. It contains, in roughly equal parts, lines like:

kernel: php-fpm invoked oom-killer: gfp_mask=0x201da, order=0,
oom_adj=0…

and “kernel: mysqld invoked oom-killer: …” etc etc.

Not really useful. I am (re)reading this right now:

http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html

but so far it doesn't seem to be really helpful either. I had already
figured out that, for some reason, mysqld and/or php-fpm were consuming
much more memory than would be reasonable. The question is: what, in
their configuration, or in nginx's, can make that happen?

As it turned out yesterday, a good part of the problem was, indeed, the
persistent mysql connections enabled in php.ini. Turning them off made
the situation much better, but did not fix it. Today, the question is
what else, exactly, to look for, and in which logs; and above all,
whether something in the nginx/php-fpm configuration I posted in my
previous message triggers this behaviour and must be changed/optimized.
Ideas, anybody?

Thanks,
Marco

On 2015-08-18 15:23, M. Fioretti wrote:

Greetings,

I just migrated to nginx + php-fpm from apache a few websites, on a
centos 6.6 virtual server. The sites are up but… now mysqld
(MariaDB, actually) dies every 10/20 minutes with status:

Greetings,
after a few days, I can report that setting:

mysql.allow_persistent = Off

in php.ini, and then tuning some php-fpm parameters as below, fixed
the problem. There is surely much more that can be optimized
(and comments on the parameters below are welcome!) and I'll ask
about that later, but I haven't seen any more crashes,
and the websites already load quickly.

Thanks to all who helped!!!

Marco

pm = dynamic
pm.max_children = 12
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 3
pm.max_requests = 10



nginx mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx


http://mfioretti.com

It looks like your machine is running out of memory; again, this is
something I think I've dealt with in php-fpm by configuring it to
recycle the child processes so they don't start consuming too much
memory.

Here’s my fpm pool config file:

[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
pm = dynamic
pm.max_children = 15
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 3
pm.process_idle_timeout = 10s
pm.max_requests = 10

Do not take this config as-is. I have a group of nginx+php-fpm servers
running for wordpress and drupal (2 each), but your activity may be
considerably higher than what I've got.

The key parts here are the "pm." options, so you'll probably want to
investigate each setting and tune it to your requirements.

Steve.

Hi,

On 25/08/2015 08:36, M. Fioretti wrote:

mysql.allow_persistent = Off

in php.ini, and then tuning some php-fpm parameters as below, fixed
the problem. There is surely much more that can be optimized
(and comments on the parameters below are welcome!) and I'll ask
about that later, but I haven't seen any more crashes,
and the websites already load quickly.

This is good news; I was actually wondering if you'd sorted it.

Thanks to all who helped!!!

Marco

pm = dynamic
pm.max_children = 12
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 3
pm.max_requests = 10

I don't think this is the perfect place to get answers on this, but my
understanding is that it works as follows.

php-fpm will manage its workers dynamically, starting with 3 and growing
to a maximum of 12. To keep resource usage on your server to a minimum,
it will keep between 2 and 3 of them idle, waiting for work. If more
than 3 are idle, it kills a child; each child also terminates after it
has served 10 requests. If fewer than 2 children are idle, it starts
another, until there are at least 2 idle.

The above may not be the optimal configuration for your needs;
ondemand may be better suited than dynamic or static. I think the key
factors are the spec of the server and how busy PHP is. There may be
some way to monitor the running number of children so you can fine-tune
later, but setting the limits too low may cause problems if there's a
sudden surge in requests.
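For illustration, an ondemand pool would look something like this (the
values are copied from the tuned pool above and are a starting point,
not a recommendation):

```ini
; Hypothetical ondemand variant: children are forked only when requests
; arrive, and reaped after sitting idle for process_idle_timeout.
pm = ondemand
pm.max_children = 12
pm.process_idle_timeout = 10s
pm.max_requests = 10
```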

The php manual has some information on each of these settings at
http://php.net/manual/en/install.fpm.configuration.php#pm

Steve.
