Memory consumption in case of huge number of configs

Hello,

I noticed the following behavior of nginx recently. Everything I describe
here relates to the case where we have a huge number of configuration
files (> 1000). Once nginx is started it occupies more than 100 MB of
memory, and that memory is not freed on fork. That is expected behavior,
but something weird happens on configuration reload: once a HUP signal is
sent, the master process doubles its occupied memory. If the reload is
repeated, memory consumption stays the same, so it looks like memory is
not reused during the reload; instead a new pool is created, which leads
to wasted memory.
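For reference, this is roughly how I measure it (a sketch; the pidfile
path /run/nginx.pid is an assumption and depends on your build and
distribution):

```shell
# Measure the nginx master's resident set size before and after a reload.
# /run/nginx.pid is an assumed default; adjust to your pid_path setting.
pid=$(cat /run/nginx.pid)
before=$(ps -o rss= -p "$pid")
kill -HUP "$pid"
sleep 2
after=$(ps -o rss= -p "$pid")
echo "RSS before: ${before} kB, after: ${after} kB"
```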

Posted at Nginx Forum:

On Monday 18 June 2012 23:47:21 dparshin wrote:

> I noticed the following behavior of nginx recently. Everything I
> describe here relates to the case where we have a huge number of
> configuration files (> 1000). Once nginx is started it occupies more
> than 100 MB of memory, and that memory is not freed on fork. That is
> expected behavior, but something weird happens on configuration
> reload: once a HUP signal is sent, the master process doubles its
> occupied memory. If the reload is repeated, memory consumption stays
> the same, so it looks like memory is not reused during the reload;
> instead a new pool is created, which leads to wasted memory.

Of course it creates a new pool. Nginx must continue to work and handle
requests, even if it fails to load the new configuration.

wbr, Valentin V. Bartenev

Valentin V. Bartenev wrote:

> Of course it creates a new pool. Nginx must continue to work and
> handle requests, even if it fails to load the new configuration.

Sounds reasonable, but the unused pool (after the reload has finished
successfully) is not destroyed, and after fork the amount of unused
memory is multiplied by the number of workers. I mean that we end up
with two pools in every worker and in the master process: the pool with
the active configuration and an unused pool left over from the reload.


On Tuesday 19 June 2012 19:42:14 dparshin wrote:
[…]

> Of course it creates a new pool. Nginx must continue to work and
> handle requests, even if it fails to load the new configuration.

> Sounds reasonable, but the unused pool (after the reload has finished
> successfully) is not destroyed,

Actually it is destroyed. But only after all old workers have finished
servicing their clients.

> and after fork the amount of unused memory
> is multiplied by the number of workers.

The memory used for the configuration pool isn't multiplied by the
number of workers because of COW (copy-on-write).

wbr, Valentin V. Bartenev

Hello!

On Tue, Jun 19, 2012 at 11:42:14AM -0400, dparshin wrote:

> > Of course it creates a new pool. Nginx must continue to work and
> > handle requests, even if it fails to load the new configuration.
>
> Sounds reasonable, but the unused pool (after the reload has finished
> successfully) is not destroyed,

It is destroyed, but you don’t see it as your system allocator
doesn’t return the freed memory to the system.

> and after fork the amount of unused memory
> is multiplied by the number of workers.

And this isn’t true even for really leaked memory, as fork() uses
copy-on-write. See Copy-on-write - Wikipedia.

> I mean that we end up with two pools in every worker and in the
> master process: the pool with the active configuration and an unused
> pool left over from the reload.

This is not true, see above.

Maxim D.

Hello!

On Tue, Jun 19, 2012 at 10:46:52PM +0400, Valentin V. Bartenev wrote:

> Actually it is destroyed. But only after all old workers have finished
> servicing their clients.

No, the old cycle’s pool is destroyed right after the new configuration
is read, at the end of ngx_init_cycle().

(Note that this applies to the normal mode of operation with a master
process. Without a master process there are some shims that delay the
old cycle’s pool destruction, as it might still be needed. That mode is
for debugging only, though.)

Maxim D.