Memory pool allocation

Suppose I am allocating a pool larger than 4K (the page size). Say, for
example, I call the function ngx_create_pool with 8096. This function
will set max to 4095 even though it has allocated roughly 8K. I am not
sure why it is done this way.

p->max = (size < NGX_MAX_ALLOC_FROM_POOL) ? size : NGX_MAX_ALLOC_FROM_POOL;

I know I have created a pool of size 8K; now suppose I allocate 4K
(4096) from it by calling ngx_palloc with 4096. There we check
if (size <= pool->max), which in this case is not satisfied, so it goes
on to call ngx_palloc_large, which in turn allocates the 4K separately.

This somehow does not sound right. Why does ngx_create_pool cap max at
one page when it has allocated more than that? It does not do chaining
either.

Any expert opinions???

Thanks, Santos

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,249161,249161#msg-249161


On Wed, Apr 09, 2014 at 08:31:29AM -0400, nginxsantos wrote:

> I know, I have created a pool with size 8K, now I am allocating say 4K
> Thanks, Santos
Hint: allocations not exceeding pool->max are not freed by ngx_pfree()
until the pool is destroyed.
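The hint can be illustrated with a hypothetical miniature pool (assumed names, not the real nginx API): small allocations are bump-allocated from the pool block and cannot be returned individually, while large allocations live on a separate list, which is what lets ngx_pfree() release them before the pool is destroyed.

```c
#include <stdlib.h>

/* Hypothetical miniature pool, for illustration only. */
typedef struct large_node {
    void              *alloc;
    struct large_node *next;
} large_node;

typedef struct {
    unsigned char *last, *end;  /* bump region for small allocations */
    size_t         max;
    large_node    *large;       /* separately malloc'ed big blocks */
} mini_pool;

void *mini_palloc(mini_pool *p, size_t size) {
    if (size <= p->max && (size_t)(p->end - p->last) >= size) {
        void *m = p->last;                 /* small: bump the pointer */
        p->last += size;
        return m;
    }
    large_node *n = malloc(sizeof *n);     /* large: own malloc, tracked */
    if (n == NULL) return NULL;
    n->alloc = malloc(size);
    n->next = p->large;
    p->large = n;
    return n->alloc;
}

/* Same contract as ngx_pfree(): only blocks on the large list can be
 * freed early; a small allocation is not found on the list and stays
 * put until the whole pool is destroyed. */
int mini_pfree(mini_pool *p, void *m) {
    for (large_node *n = p->large; n != NULL; n = n->next) {
        if (n->alloc == m) {
            free(n->alloc);
            n->alloc = NULL;
            return 0;                      /* freed immediately */
        }
    }
    return -1;                             /* small: declined */
}
```

So a 4096-byte allocation from the poster's 8K pool takes the large path and can be reclaimed with ngx_pfree() long before ngx_destroy_pool() runs.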

Thank you.
But my question is: when we are allocating a pool of more than one page,
why do we cap the max value at one page size, thereby forcing a further
memory allocation?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,249161,249189#msg-249189

On Thursday 10 April 2014 04:32:42 nginxsantos wrote:

> Thank you.
> But, my question is when we are allocating a pool of more than one page size
> why are we putting the max value as one page size and then further leading
> to memory allocation.

Because there are no advantages in allocating big objects from pool’s memory.

wbr, Valentin V. Bartenev
