Memory consumption of nginx

Hi,

I cross-compiled nginx for MicroBlaze processors
(http://github.com/peschuster/nginx) and am currently running performance
benchmarks with nginx on a MicroBlaze processor in a custom-designed SoC
on an FPGA.

However, I am having problems with the memory consumption of nginx:

When I perform 10,000 requests at 20 conn/s and 2 requests/conn (using
httperf [1]), the memory used by nginx grows to about 40 MB.
When I repeat this benchmark, the used memory grows from 40 to 80 MB.

The problem with this behavior is that my SoC only has 256 MB of RAM in
total (the file system also runs completely from RAM using a ramdisk).
Consequently, nginx crashes the whole system by consuming all available
memory in longer benchmark runs.

Is this the intended behavior of nginx? Why isn't it re-using the already
allocated memory?
Any hints on how I can circumvent or track down this problem?

Thanks.

Peter

1: httperf --timeout=5 --client=0/1 --server=192.168.2.125 --port=80
--uri=/index.html --rate=20 --send-buffer=4096 --recv-buffer=16384
--num-conns=5000 --num-calls=2

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,232328,232328#msg-232328

On 10/29/12 21:13, peschuster wrote:

However, I am having problems with the memory consumption of nginx:

When I perform 10,000 requests at 20 conn/s and 2 requests/conn (using
httperf [1]), the memory used by nginx grows to about 40 MB.
When I repeat this benchmark, the used memory grows from 40 to 80 MB.

  1. Do you use any 3rd-party modules?
  2. Are these requests served by nginx itself (e.g. static files) or
     proxied to some backend?
  3. Memory usage depends on the features used: SSL, SSI, gzip,
     limit_rate, the geo module, etc.

If gzip is used for static files, it is better to pre-compress them and
use ngx_http_gzip_static_module.
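A minimal sketch of such a configuration, assuming the compressed files
are generated ahead of time (foo.html.gz next to foo.html) and nginx is
built with --with-http_gzip_static_module:

```nginx
http {
    server {
        listen 80;

        location / {
            root html;
            # Serve a pre-compressed foo.html.gz when the client
            # accepts gzip, instead of compressing on the fly.
            gzip_static on;
        }
    }
}
```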

Also, to save memory, use one worker and set a reasonably small limit on
connections:

worker_processes 1;

events {
    worker_connections 512;
}


Anton Y.

I don't use any third-party modules, and neither SSL nor gzip is enabled.
nginx was compiled with the following parameters:

--with-debug
--without-http_rewrite_module
--without-http_gzip_module

nginx only serves one static file. Changing the size of the file (10 KB
to 200 B) has no effect on the memory consumption of nginx.

Here is my nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request"';

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,232328,232368#msg-232368

On Monday 29 October 2012 21:13:01 peschuster wrote:

httperf [1]), the memory used by nginx grows to about 40 MB.
When I repeat this benchmark, the used memory grows from 40 to 80 MB.

The problem with this behavior is that my SoC only has 256 MB of RAM in
total (the file system also runs completely from RAM using a ramdisk).
Consequently, nginx crashes the whole system by consuming all available
memory in longer benchmark runs.

Is this the intended behavior of nginx? Why isn't it re-using the already
allocated memory?

Nginx releases allocated memory after it completes each request.

Any hints on how I can circumvent or track down this problem?

Most likely your system's memory allocator does not return freed memory
to the OS.

wbr, Valentin V. Bartenev


http://nginx.com/support.html
http://nginx.org/en/donation.html

On Wednesday 31 October 2012 15:28:58 peschuster wrote:

VBart Wrote:

Most likely your system's memory allocator does not return freed memory
to the OS.

How can I check this? I suspect this should be part of the OS (Linux)?

Usually it's part of glibc on Linux.

Could you give me any keyword to read more about this?

Try man mallopt.

wbr, Valentin V. Bartenev


VBart Wrote:

Most likely your system's memory allocator does not return freed memory
to the OS.

How can I check this? I suspect this should be part of the OS (Linux)?
Could you give me any keyword to read more about this?

I looked at /proc/meminfo after the first and second batch of requests.
The newly allocated memory is categorized as “Active(anon)”:

Active: 36976 kB
Inactive: 59280 kB
Active(anon): 34376 kB

AnonPages: 34400 kB

vs.

Active: 71092 kB
Inactive: 60088 kB
Active(anon): 68428 kB

AnonPages: 68452 kB

Thanks.
Peter

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,232328,232426#msg-232426
