Hello nginx!
I have one worker process that uses over 25 GB of memory (and keeps growing).
My configuration is… let’s say special:
So there is nginx, which proxies all requests to the PHP backend, and the PHP
backend sends a large response back to nginx. I set the fastcgi_buffers
extremely large to avoid nginx creating temporary files on my disk - which
would result in high CPU load.
Here is my configuration (reduced to the problem):
worker_processes 1;
worker_rlimit_nofile 80000;
worker_priority -20;

events {
    worker_connections 10240;
    multi_accept on;
}
…
# fastcgi settings
fastcgi_buffers 20480 1k;
fastcgi_connect_timeout 30;
fastcgi_read_timeout 30;
fastcgi_send_timeout 30;
fastcgi_keep_conn on;

upstream php-backend {
    server 127.0.0.1:9000;
    keepalive 10000;
}
As you can see, the buffers are extremely large, to avoid disk buffering. The
problem is that nginx doesn't free the buffers; it just eats and eats. I
know it's my fault and not nginx's fault. What am I doing wrong?
The responses from my PHP backend can range from 1 KB to 300 MB.
What is the best setting for my situation?
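
One idea I had (untested, illustrative values only) would be fewer but larger
buffers, letting oversized responses spill to temporary files instead of
holding everything in memory:

# a sketch, not a recommendation: 64 x 64k = 4 MB of in-memory buffering per request
fastcgi_buffers 64 64k;
# anything beyond that may spill to a temporary file, capped at 512 MB
fastcgi_max_temp_file_size 512m;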
Thanks
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239795,239795#msg-239795