Nginx keeps using more and more memory

Hi,

We are using nginx to proxy to 4 back-end servers, and we terminate SSL
at nginx. We use nginx in a somewhat special way, in that we make one
HTTP GET and the back-end servers give a very long (essentially
never-ending) response.

The trouble is that I notice the nginx worker processes keep using up
more and more memory. When the connections to nginx terminate,
everything seems to be cleaned up and the memory usage of the worker
processes drops to normal.

My SSL config looks like this:

# HTTPS server

server {
    server_name  myserver;
    listen       myserver:4443;
    ssl on;

    ssl_certificate      /myserver.pem;
    ssl_certificate_key  /myserver.pem;

    proxy_ssl_session_reuse off;

    ssl_protocols  SSLv3 TLSv1;
    ssl_ciphers  HIGH:!ADH:!MD5;
    ssl_prefer_server_ciphers   on;

    proxy_buffering off;

    location / {
        root   /www/server/html;
        index  index.html index.htm;
    }


    location /SERVER_1/ {
        rewrite /SERVER_1/(.*) /$1 break;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_pass http://localhost:8080;
    }

}

I am running nginx/1.0.11.

I can reproduce the problem every time with the following test setup:

one nginx instance and a special servlet that only dishes out a constant
stream of serialized Java objects (if you are interested, I can also
provide this servlet). Then, using about 50 long-running HTTP GETs
(simulating 100 clients), I can always make the memory usage in nginx grow.
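For reference, such a never-ending servlet could look roughly like the sketch
below. This is only a minimal illustration, not the actual servlet from my
test; the class name StreamingServlet and the Long payload are placeholders,
and it assumes the javax.servlet API.

import java.io.IOException;
import java.io.ObjectOutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: streams serialized Java objects until the client
// disconnects, which is roughly what the test servlet does.
public class StreamingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        ObjectOutputStream out = new ObjectOutputStream(resp.getOutputStream());
        long counter = 0;
        try {
            while (true) {
                // Any Serializable payload works; a Long keeps the sketch small.
                out.writeObject(Long.valueOf(counter++));
                out.flush();
                // Clear the stream's back-reference cache so the servlet itself
                // does not accumulate memory.
                out.reset();
                Thread.sleep(100); // pace the stream a little
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // The loop ends only when the client disconnects (writeObject/flush
        // then throws an IOException) or the thread is interrupted.
    }
}

Proxied through a location like /SERVER_1/ above, about 50 of these open GETs
are enough to show the growing worker memory.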

Does anyone know if I misconfigured my nginx?


posted at http://www.serverphorums.com

nginx uses a memory pool for each HTTP request.
If the request is never-ending, more and more memory will be allocated.

2012/1/18 [email protected]:

On Tuesday 17 January 2012 22:01:01 [email protected] wrote:

[…]

Which OpenSSL version do you use?

wbr, Valentin V. Bartenev

Hello!

On Tue, Jan 17, 2012 at 07:01:01PM +0100, [email protected]
wrote:

We are using nginx to proxy to 4 back-end servers, and we
terminate SSL at nginx. We use nginx in a somewhat special way,
in that we make one HTTP GET and the back-end servers give a
very long (essentially never-ending) response.

The trouble is that I notice the nginx worker processes keep
using up more and more memory. When the connections to nginx
terminate, everything seems to be cleaned up and the memory
usage of the worker processes drops to normal.

[…]

I am running nginx/1.0.11.

Please try 1.1.x. There is one known problem in 1.0.x which
causes it to allocate about 2 pointers per buffer sent. This is
known to be noticeable in the long-running response case.

The problem was fixed in 1.1.4, but unfortunately it’s impossible
to merge the fix into 1.0.x due to the required API change.

Maxim D.

Hi,

Thanks for the answer.

Regarding your tip about proxy_buffers, I don't think that's the cause,
because I have disabled proxy_buffering.

I will try out my tests with the 1.1 version of nginx and let you know.

regards,

Saqib

Also, do you know or can you give an estimate of when the 1.1 version
will go stable?

regards,

Saqib

On Wednesday 18 January 2012 16:33:22 Saqib Rasul wrote:

Also, do you know or can you give an estimate of when the 1.1 version
will go stable?

http://trac.nginx.org/nginx/milestone/1.1.17

FYI, the nginx “devel” vs. “stable” difference is mainly about API and
behavior stability. Both branches are reliable enough to use in
production.

wbr, Valentin V. Bartenev

OK, I tried 1.1.13 and the bug seems to have been fixed there. Thanks
again for the help.

Regards,

Saqib

On Wednesday 18 January 2012 13:27:01 Delta Y. wrote:

nginx uses a memory pool for each HTTP request.
If the request is never-ending, more and more memory will be allocated.

No. It’s not true.

Please, see the following documentation:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers

also:
http://wiki.nginx.org/HttpProxyModule#proxy_max_temp_file_size

wbr, Valentin V. Bartenev