Memory Management (> 25GB memory usage)

Hello nginx!

I have one worker process, which uses over 25GB of memory (and doesn’t
stop doing that).
My configuration is… let’s say special:

So there is nginx, which proxies all requests to the PHP backend, and the
PHP backend sends a large response back to nginx. I set the fastcgi_buffers
extremely large to avoid nginx creating temporary files on my disk - which
would result in high CPU load.

Here is my configuration (reduced to the problem):

worker_processes 1;
worker_rlimit_nofile 80000;
worker_priority -20;

events {
    worker_connections 10240;
    multi_accept on;
}

    # fastcgi settings
    fastcgi_buffers 20480 1k;
    fastcgi_connect_timeout 30;
    fastcgi_read_timeout 30;
    fastcgi_send_timeout 30;
    fastcgi_keep_conn on;
    upstream php-backend {
            server 127.0.0.1:9000;
            keepalive 10000;
    }

As you can see, the buffers are extremely large, to avoid disk buffering.
The problem is that nginx doesn’t free the buffers. It just eats and eats.
I know it’s my fault and not nginx’s fault. What am I doing wrong?

The response of my PHP backend can be anywhere from 1k to 300MB.

What is the best setting for my situation?

Thanks

Posted at Nginx Forum:

Hello,

Please set worker_processes according to the number of CPUs your server
has, and reduce or comment out the fastcgi buffer values, then examine
the changes after doing so.

Thanks for your reply!

I don’t have performance issues. It’s just the memory usage.
I don’t know how nginx handles its memory, but if I increase the number of
worker processes, wouldn’t this lead to much higher memory usage
(worker_processes * 25GB)?

The one worker is able to handle all requests, I don’t see the point of
adding more workers.
I want to know why nginx does not free the buffers after a request is
finished.

Posted at Nginx Forum:

Hello!

On Mon, Jun 03, 2013 at 08:57:21AM -0400, Belly wrote:

    # fastcgi settings
    fastcgi_buffers 20480 1k;

Just a side note: each buffer structure takes about 100 bytes of
memory on 64-bit platforms, and using 1k buffers results in about
10% overhead just because of this.
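
To spell out the arithmetic behind that (using the figures above):

    20480 buffers x ~100 bytes = ~2 MB of bookkeeping per connection
    20480 buffers x 1k         = 20 MB of buffer space per connection
    ~2 MB / 20 MB              = ~10% overhead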

As you can see, the buffers are extremely large, to avoid disk buffering.
The problem is that nginx doesn’t free the buffers. It just eats and eats.
I know it’s my fault and not nginx’s fault. What am I doing wrong?

The response of my PHP backend can be anywhere from 1k to 300MB.

With your settings each connection can allocate up to 20M of
buffers. That is, about 1.5k connections are enough to allocate 25G of
memory. So the basic question is - how many connections are open?

With the pessimistic assumption of 10k connections as per
worker_connections, your configuration would result in more than
200G of memory used.
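
To make the numbers explicit:

    20480 x 1k buffers = 20 MB of buffers per connection
    ~1,250 x 20 MB     = ~25 GB  (the usage observed)
    10,240 x 20 MB     = ~200 GB (worst case at worker_connections)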

What is the best setting for my situation?

I would recommend using “fastcgi_max_temp_file_size 0;” if you
want to disable disk buffering (see [1]), and configuring some
reasonable number of reasonably sized fastcgi_buffers. I would
recommend starting tuning with something like 32 x 64k buffers.
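
For example, something along these lines (a sketch of that suggested
starting point, not a tested configuration):

    # fewer, larger buffers; no disk buffering at all
    fastcgi_buffers 32 64k;          # 32 x 64k = 2 MB max per connection
    fastcgi_max_temp_file_size 0;    # never spool responses to disk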

[1] http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html


Maxim D.
http://nginx.org/en/donation.html

Hello!

On Mon, Jun 03, 2013 at 10:13:03AM -0400, Belly wrote:

I read about fastcgi_max_temp_file_size, but I’m a bit afraid of it.
fastcgi_max_temp_file_size 0; states that data will be transferred
synchronously. What does it mean exactly? Is it faster/better than disk
buffering? Nginx is built in an asynchronous way. What happens if a worker
will do a synchronous job inside an asynchronous one? Will it block the
event loop?

The linked document describes the effect as follows:

    Value of zero disables buffering of responses to temporary files.

This is what it actually does - it stops nginx from using disk
buffering. Instead, if fastcgi_buffers configured isn’t enough,
nginx will wait for some buffers to be sent to a client before
reading more data from a backend. Note this means the backend
will be busy sending the response for a longer time.


Maxim D.
http://nginx.org/en/donation.html

Thanks Maxim for your answer!

Maxim D. Wrote:

Just a side note: each buffer structure takes about 100 bytes of
memory on 64-bit platforms, and using 1k buffers results in about
10% overhead just because of this.

Very interesting! - Didn’t know that… Thanks!

With your settings each connection can allocate up to 20M of buffers.
That is, about 1.5k connections are enough to allocate 25G of memory.
So the basic question is - how many connections are open?

1000 - 2000… got your point.

I would recommend using “fastcgi_max_temp_file_size 0;” if you want to
disable disk buffering (see [1]), and configuring some reasonable number
of reasonably sized fastcgi_buffers.

I read about fastcgi_max_temp_file_size, but I’m a bit afraid of it.
fastcgi_max_temp_file_size 0; states that data will be transferred
synchronously. What does it mean exactly? Is it faster/better than disk
buffering? Nginx is built in an asynchronous way. What happens if a worker
will do a synchronous job inside an asynchronous one? Will it block the
event loop?



Posted at Nginx Forum:

Thanks Maxim and Jonathan for making these things clear to me!
Disabling disk buffering by using fastcgi_max_temp_file_size 0; and
reducing
the number of buffers solved the problem and made my service more
efficient.

Best Regards,
Belly Dancer

Posted at Nginx Forum:

On Jun 3, 2013, at 10:13 AM, Belly wrote:

I read about fastcgi_max_temp_file_size, but I’m a bit afraid of it.
fastcgi_max_temp_file_size 0; states that data will be transferred
synchronously. What does it mean exactly? Is it faster/better than disk
buffering? Nginx is built in an asynchronous way. What happens if a worker
will do a synchronous job inside an asynchronous one? Will it block the
event loop?

It’s always been my understanding that in this context, “synchronously”
means that nginx is proxying the data from php/fcgi to the client in
real time.

This sounds like a typical problem of application load balancing.

The disk buffering / temp files allow nginx to immediately “slurp”
the entire response from the backend process, and then serve the file
to the downstream client. This has the advantage of letting you
immediately re-use the fcgi process for dynamic content; slow or hung-up
connections downstream won’t tie up your pool of fcgi/apache processes.

Restated in terms of blocking: the temp files allow the blocking to happen
within nginx instead of PHP (nginx can handle 10k connections; PHP is
limited to the number of processes). By removing the temp files, the
blocking will happen within PHP instead.

My advice would be to use URL partitioning to segment this type of
behavior. I would only allow specific URLs to have no temp files, and I
would proxy them back to a different pool of fcgi (or apache) servers
running with a tweaked config. This would allow the blocking activity
from the routes serving large files to not affect the “global” pool of
PHP processes.
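
For illustration, a sketch of that kind of split (the upstream names,
the second port, and the /export/ location are made up for the example):

    upstream php_general  { server 127.0.0.1:9000; }
    upstream php_bigfiles { server 127.0.0.1:9001; }  # separate pool

    server {
        # normal dynamic routes: default buffering, temp files allowed
        location / {
            include fastcgi_params;
            fastcgi_pass php_general;
        }

        # large-response routes: no temp files, bounded in-memory buffers,
        # isolated PHP pool so blocking here can't starve the general pool
        location /export/ {
            include fastcgi_params;
            fastcgi_pass php_bigfiles;
            fastcgi_buffers 32 64k;
            fastcgi_max_temp_file_size 0;
        }
    }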

I would also look into periodic reloads of nginx, to see if that frees
things up. If so, that might be a simpler/more elegant solution.

I encountered problems like this about 10 years ago with mod_perl under
Apache. The aggressive code optimizations and memory/process management
were tailored to making the application work very well, but did not play
nice with the rest of the box. The fix was to keep a low number of
max_requests and move to a “vanilla + mod_perl apache” system. Years
later, nginx became the vanilla Apache.

Similar issues happen to people in the Python and Ruby communities as
well; more expensive or intensive routes are often sectioned off and
dispatched to a different pool of servers, so their workload doesn’t
affect the rest of the requests.