Memory use flares up sharply, how to troubleshoot?

Hi,

Several times recently, we have seen our production nginx memory usage flare
up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an
hour or so, and then recover. There is no spike in the number of connections,
just in memory use, so whatever causes this, it does not seem to be an
increase in concurrency.

The obvious thing to suspect for this is our app’s newest change, which
involves streaming responses proxied from an upstream (via uwsgi_pass); these
responses can get very large and run for many minutes, pumping hundreds of
megabytes each. But I expect nginx memory use for these requests to be bounded
by uwsgi_buffers (shouldn’t it be?) – and indeed I cannot reproduce the problem
by making such requests, even copying the exact ones that are being made when
the memory spike occurs. In my tests, the responses get buffered as they
should be, and delivered normally, without memory growth.

So, what is a good way to investigate what causes all this memory to be
suddenly allocated? Is there a way of introspecting/logging nginx memory
allocation edge cases like this? (Is there documentation on this which I
didn’t find?)

During the memory spike (and before and after), connections are around 270, of
which around 20-25 are writing and the rest are waiting.

Our nginx config has:

worker_processes 2;
events {
    worker_connections 1024;
}
http {
    uwsgi_buffers 64 8k;
    proxy_buffers 64 8k;
}

All other *_buffers settings are at their defaults, on a 64-bit machine.

Accounting for the above buffers, other buffer defaults, and some uwsgi_cache
and proxy_cache zones, and estimating on the high side (e.g. all buffer
budgets fully used for each active connection) for 270 connections, here is my
rough sketch of the memory uses I am aware of, probably with some
misunderstandings along the way:

22 * keys_zone=10m for uwsgi_cache_path + proxy_cache_path = 220 MB
uwsgi_buffers: 64 * 8kB * 270 = 135 MB
proxy_buffers: 64 * 8kB * 270 = 135 MB
gzip_buffers: 16 * 8kB * 270 = 34 MB
output_buffers: 1 * 32kB * 270 = 8.6 MB
large_client_header_buffers: 4 * 8kB * 270 = 8.6 MB
ssl_buffer_size: 1 * 16k * 270 = 4.3 MB
client_body_buffer_size: 1 * 16kB * 270 = 4.3 MB
client_header_buffer_size: 1 * 1kB * 270 = 0.3 MB
access_log buffers: 24 * 8kB + 1 * 32k = 0.2 MB

This comes out to 546 MB tops … what big memory use cases are missing in this
picture?

(And are these buffer size upper-bound estimates significantly incorrect?)

This is in Nginx 1.6.0 on Linux 3.2.57 64-bit … specifically:

$ nginx -V
nginx version: nginx/1.6.0
built by gcc 4.7.2 (Debian 4.7.2-5)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-http_ssl_module --with-http_realip_module --with-http_addition_module
--with-http_sub_module --with-http_dav_module --with-http_flv_module
--with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module
--with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'
--with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6

$ uname -a
Linux ren2 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3+deb7u2 x86_64 GNU/Linux

Thanks, best regards,

Gulli

Posted at Nginx Forum:

Hello!

On Mon, Jul 21, 2014 at 11:15:00AM -0400, gthb wrote:

to be bounded by uwsgi_buffers (shouldn’t it be?) – and indeed I cannot
reproduce the problem by making such requests, even copying the exact ones
that are being made when the memory spike occurs. In my tests, the responses
get buffered as they should be, and delivered normally, without memory
growth.

How do you track “nginx memory”?

From what you describe I suspect that disk buffering occurs (see the
ngx_http_uwsgi_module documentation), and the number you are looking at
includes the size of files on disk.
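
For reference, the directives that control this on-disk buffering look roughly
like this; the values shown are assumed defaults, not taken from your config,
so check the ngx_http_uwsgi_module documentation for your version:

    # Sketch only: assumed defaults for the uwsgi temp-file buffering knobs.
    http {
        uwsgi_buffering on;                           # buffer responses from the uwsgi upstream
        uwsgi_max_temp_file_size 1024m;               # per-request cap on the on-disk spill
        uwsgi_temp_file_write_size 16k;               # how much is written to the temp file at a time
        uwsgi_temp_path /var/cache/nginx/uwsgi_temp;  # matches --http-uwsgi-temp-path in the build above
    }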

So, what is a good way to investigate what causes all this memory to be
suddenly allocated? Is there a way of introspecting/logging nginx memory
allocation edge cases like this? (Is there documentation on this which I
didn’t find?)

The debugging log includes full information about all memory allocations; see
the debugging log documentation.
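
For example, a minimal sketch of turning it on globally, assuming the binary
is built with --with-debug (your nginx -V output above does not show that
flag, so a debug-enabled build may be needed first):

    # Sketch: global debug logging; requires an nginx built with --with-debug.
    error_log /var/log/nginx/error.log debug;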


Maxim D.
http://nginx.org/

How do you track “nginx memory”?

What I was tracking was memory use per process name as reported by New Relic
nrsysmond, which I’m pretty sure is RSS from ps output, summed over all nginx
processes.

From what you describe I suspect that disk buffering occurs (see the
ngx_http_uwsgi_module documentation), and the number you are looking at
includes the size of files on disk.

I wish :) because that’s what I want to happen for these large responses. But
that’s definitely not it, because we see a spike of swap when this occurs,
with most other processes on the machine being paged out … and in the worst
spikes swap has filled up and an OOM kill has occurred, which conveniently
records in syslog the RSS for an nginx process being killed:

Jul 21 03:54:16 ren2 kernel: [3929562.712779] uwsgi invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0

Jul 21 03:54:16 ren2 kernel: [3929562.737340] Out of memory: Kill process 5248 (nginx) score 328 or sacrifice child
Jul 21 03:54:16 ren2 kernel: [3929562.737352] Killed process 5248 (nginx) total-vm:3662860kB, anon-rss:3383776kB, file-rss:16kB

So that catches Nginx holding a 3.2 GB resident set, matching what New Relic
says about the same time.

The debugging log includes full information about all memory allocations; see
the debugging log documentation.

Thank you. I haven’t been able to reproduce this outside of production (or
even in production) so I might have to leave debug logging enabled in
production and hope to catch this next time it happens. Am I right to assume
that enabling debug is going to weigh quite heavily on production usage, and
eat up disk very fast? (Traffic peaks at around 100 req/sec and 2 MB/sec.)

Cheers,

Gulli

Posted at Nginx Forum:

Hello!

On Mon, Jul 21, 2014 at 02:18:08PM -0400, gthb wrote:

[…]

The debugging log includes full information about all memory allocations; see
the debugging log documentation.

Thank you. I haven’t been able to reproduce this outside of production (or
even in production) so I might have to leave debug logging enabled in
production and hope to catch this next time it happens. Am I right to assume
that enabling debug is going to weigh quite heavily on production usage, and
eat up disk very fast? (Traffic peaks at around 100 req/sec and 2 MB/sec.)

While debug logging is costly enough to notice, 100 r/s is quite
low and it should be fine to just enable the logging. If in
doubt, you may start by enabling debug only for a fraction of
users, using the debug_connection directive (documented under
core functionality).
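
A minimal sketch of that, with placeholder addresses (debug_connection lives
in the events block and also needs a --with-debug build):

    events {
        worker_connections 1024;
        debug_connection 192.0.2.1;       # hypothetical single client to debug
        debug_connection 192.0.2.0/24;    # or a whole test network
    }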

In any case, it may be a good idea to show the full configuration, as
recommended in the nginx debugging documentation - mostly to make sure you
haven’t overlooked something obvious like “uwsgi_buffer_size 10m” or the like.


Maxim D.
http://nginx.org/

Hello!

On Mon, Jul 21, 2014 at 05:44:45PM -0400, gthb wrote:

upstream: "uwsgi://123.45.67.89:3003", host: "foo.com"
2014/07/21 11:41:18 [alert] 16885#0: worker process 21182 exited on signal 9

and retrying that request reproduces the problem, nginx growing in size
without bound. (The request never made it to the access log because of the
OOM kill, which is why my previous testing didn’t reproduce it)

[…]

These extra lines never appear in the healthy requests, so I imagine they
point to the problem (but I am not at all familiar with Nginx debug output);
in particular those “write new buf” lines look relevant; they are output
right after ngx_alloc_chain_link is called.

The lines in question are just sending the response to the client
via the response body filter chain.

All the possibly relevant Nginx config:

I don’t see anything obviously wrong in the config, but again - I
strongly recommend you to post full configuration. Or, better
yet, if you are able to reproduce the problem with a reduced
config, post it instead (and corresponding debug log).

[…]

Does this narrow down the problem? Can I provide anything further?

No. Please show the debug log.


Maxim D.
http://nginx.org/

Hi,

I finally reproduced this, with debug logging enabled — I found the
problematic request in the error log preceding the kill signal, saying
it
was being buffered to a temporary file:

2014/07/21 11:39:39 [warn] 21182#0: *32332838 an upstream response is buffered
to a temporary file /var/cache/nginx/uwsgi_temp/9/90/0000186909 while reading
upstream, client: x.x.x.x, server: foo.com, request: "GET
/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1 HTTP/1.1", upstream:
"uwsgi://123.45.67.89:3003", host: "foo.com"
2014/07/21 11:41:18 [alert] 16885#0: worker process 21182 exited on signal 9

and retrying that request reproduces the problem, nginx growing in size
without bound. (The request never made it to the access log because of the
OOM kill, which is why my previous testing didn’t reproduce it)

So here are debug log segments for single epoll events, during a healthy
streaming request (one that doesn’t trigger this problem), and during the
problematic request. These requests ought to behave just the same (I’m not
aware of any behavior difference in our upstream app, invoked via uwsgi_pass,
except that the CSV lines are a little longer in the problematic response).

Healthy request:

epoll: fd:43 ev:0005 d:00000000023FF960
*1 http upstream request: "/api/well-behaved/request.csv?el=fnord!a=1.2:b=3.4.5:c&dates_as_dates=1"
*1 http upstream process upstream
*1 pipe read upstream: 1
*1 readv: 3:4096
*1 pipe recv chain: 44
*1 readv: 3:4096
*1 readv() not ready (11: Resource temporarily unavailable)
*1 pipe recv chain: -2
*1 pipe buf free s:0 t:1 f:0 00000000020EED30, pos 00000000020EED30, size: 2267 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 000000000220B1C0, pos 000000000220B1C0, size: 0 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 0000000001FE5900, pos 0000000001FE5900, size: 0 file: 0, size: 0
*1 pipe length: -1
*1 pipe write downstream: 1
*1 pipe write busy: 0
*1 pipe write: out:0000000000000000, f:0
*1 pipe read upstream: 0
*1 pipe buf free s:0 t:1 f:0 00000000020EED30, pos 00000000020EED30, size: 2267 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 000000000220B1C0, pos 000000000220B1C0, size: 0 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 0000000001FE5900, pos 0000000001FE5900, size: 0 file: 0, size: 0
*1 pipe length: -1
*1 event timer: 43, old: 1405973335524, new: 1405973335560
*1 http upstream request: "/api/well-behaved/request.csv?el=fnord!a=1.2:b=3.4.5:c&dates_as_dates=1"
*1 http upstream dummy handler
timer delta: 0
posted events 0000000000000000
worker cycle
accept mutex lock failed: 0
epoll timer: 500

For the problematic request, the epoll events often look identical (all the
same kinds of lines, plus sometimes an extra pipe recv chain and readv line
pair, presumably just because of data being streamed in slightly bigger
chunks) … but sometimes they have some extra lines, which I’ve highlighted
with an XXX prefix here:

epoll: fd:42 ev:0005 d:000000000150DE70
*1 http upstream request: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1"
*1 http upstream process upstream
*1 pipe read upstream: 1
*1 readv: 3:4096
*1 pipe recv chain: 519
XXX *1 input buf #135
*1 readv: 2:4096
*1 readv() not ready (11: Resource temporarily unavailable)
*1 pipe recv chain: -2
XXX *1 pipe buf in s:1 t:1 f:0 0000000001417550, pos 0000000001417550, size: 8192 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 0000000001447610, pos 0000000001447610, size: 326 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 00000000014305C0, pos 00000000014305C0, size: 0 file: 0, size: 0
*1 pipe length: -1
*1 pipe write downstream: 1
*1 pipe write busy: 0
XXX *1 pipe write buf ls:1 0000000001417550 8192
XXX *1 pipe write: out:0000000001431878, f:0
XXX *1 http output filter "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1"
XXX *1 http copy filter: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1"
XXX *1 http postpone filter "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" 0000000001431878
XXX *1 http chunk: 8192
XXX *1 write new buf t:1 f:0 0000000001431888, pos 0000000001431888, size: 6 file: 0, size: 0
XXX *1 write new buf t:1 f:0 0000000001417550, pos 0000000001417550, size: 8192 file: 0, size: 0
XXX *1 write new buf t:0 f:0 00000000014317E0, pos 00000000004ADFFD, size: 2 file: 0, size: 0
XXX *1 http write filter: l:0 f:1 s:8200
XXX *1 http write filter limit 0
XXX *1 writev: 8200
XXX *1 http write filter 0000000000000000
XXX *1 http copy filter: 0 "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1"
XXX *1 pipe write busy: 0
*1 pipe write: out:0000000000000000, f:0
*1 pipe read upstream: 0
*1 pipe buf free s:0 t:1 f:0 0000000001447610, pos 0000000001447610, size: 326 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 0000000001417550, pos 0000000001417550, size: 0 file: 0, size: 0
*1 pipe buf free s:0 t:1 f:0 00000000014305C0, pos 00000000014305C0, size: 0 file: 0, size: 0
*1 pipe length: -1
*1 event timer: 42, old: 1405971008168, new: 1405971008452
*1 http upstream request: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1"
*1 http upstream dummy handler
timer delta: 1
posted events 0000000000000000
worker cycle
accept mutex lock failed: 0
epoll timer: 500

These extra lines never appear in the healthy requests, so I imagine they
point to the problem (but I am not at all familiar with Nginx debug output);
in particular those “write new buf” lines look relevant; they are output right
after ngx_alloc_chain_link is called.

All the possibly relevant Nginx config:

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '...';
    access_log  /var/log/nginx/access.log  main buffer=32k;
    sendfile        on;
    keepalive_timeout  65;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
    gzip  on;
    gzip_min_length 1280;
    gzip_types text/css application/json application/javascript text/csv
               text/xml text/turtle;
    log_format combined_upstream '...';
    log_format internal_proxy '...';
    uwsgi_hide_header X-...;
    uwsgi_buffers 64 8k;
    proxy_buffers 64 8k;
    include /etc/nginx/sites-enabled/*;
}

upstream fooapp.foo.com {
    server 123.45.67.89:3003 down;
    server 123.45.67.88:3003 down;
    server 123.45.67.87:3003;
    server 123.45.67.86:3003;
    least_conn;
}

server {
    listen 80;
    server_name  foo.com;
    listen 443 ssl;
    ssl_certificate wildcard.foo.com.crt;
    ssl_certificate_key wildcard.foo.com.crt.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    set_real_ip_from 123.45.67.89/27;
    access_log  /var/log/nginx/foo.com.access.log combined_upstream buffer=8k flush=2s;
    client_max_body_size 32m;

    location /api/ {
        include uwsgi_params;
        uwsgi_pass fooapp.foo.com;
        uwsgi_read_timeout 180;
    }
}

There are no other occurrences of _buffer anywhere in the config:

$ sudo grep -r _buffer /etc/nginx/
/etc/nginx/nginx.conf:    uwsgi_buffers 64 8k;
/etc/nginx/nginx.conf:    proxy_buffers 64 8k;

Beginning of output of well-behaved request:

HTTP/1.1 200 OK
Server: nginx/1.6.0
Date: Mon, 21 Jul 2014 20:48:50 GMT
Content-Type: text/csv; charset=iso-8859-1
Transfer-Encoding: chunked
Connection: keep-alive
Content-Disposition: attachment; filename=Foo-export.csv
Content-Language: en

Sex,Age group,ZIP code,Year-begin-date,Value
Female,Under 5 years,00601,2012-01-01,572
Female,Under 5 years,00602,2012-01-01,1132
Female,Under 5 years,00603,2012-01-01,1589
Female,Under 5 years,00606,2012-01-01,189
Female,Under 5 years,00610,2012-01-01,784
...

Beginning of output of problematic request:

HTTP/1.1 200 OK
Server: nginx/1.6.0
Date: Mon, 21 Jul 2014 20:49:07 GMT
Content-Type: text/csv; charset=iso-8859-1
Transfer-Encoding: chunked
Connection: keep-alive
Content-Disposition: attachment; filename=Foo-export.csv
Content-Language: en

Location,Measurement,Month-begin-date,Value
SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-01-01,30.6
SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-02-01,35.5
SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-03-01,41.4
SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-04-01,41.6
SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-05-01,44.9
...

Does this narrow down the problem? Can I provide anything further?

Cheers,

Gulli

Posted at Nginx Forum:

Hello!

On Tue, Jul 22, 2014 at 10:51:43AM -0400, gthb wrote:

http {

http://filebin.ca/1UClE4zzhfZe/debug.log.gz

Everything goes OK for a while, just a few stray mallocs, and then maybe
half a minute into the request (the time varies), after maybe 20-25MB have
been transferred, the flood of mallocs starts:

Ok, I see what goes on here. It is a combination of multiple
factors:

  • there are more than 16 buffers, hence the stack-based buffer for
    iovs isn’t big enough and nginx has to allocate memory in
    ngx_readv_chain();

  • your backend app returns data in very small chunks, thus there
    are many ngx_readv_chain() calls;

  • response is big, and hence the effect of the above is
    noticeable.

Trivial workaround is to use “uwsgi_buffers 8 64k” instead.
Or you may try the following patch:

# HG changeset patch
# User Maxim D. [email protected]
# Date 1406044801 -14400
#      Tue Jul 22 20:00:01 2014 +0400
# Node ID 129a91bfb0565ab21a0f399688be148fe5e76a1e
# Parent  0896d5cb6b3d9ba7d229863ac65cd1559b2c439a
Avoid memory allocations in ngx_readv_chain().

diff --git a/src/os/unix/ngx_readv_chain.c b/src/os/unix/ngx_readv_chain.c
--- a/src/os/unix/ngx_readv_chain.c
+++ b/src/os/unix/ngx_readv_chain.c
@@ -10,7 +10,11 @@
 #include <ngx_event.h>

-#define NGX_IOVS  16
+#if (IOV_MAX > 64)
+#define NGX_IOVS  64
+#else
+#define NGX_IOVS  IOV_MAX
+#endif

 #if (NGX_HAVE_KQUEUE)
@@ -71,7 +75,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx
             iov->iov_len += chain->buf->end - chain->buf->last;

         } else {
-            if (vec.nelts >= IOV_MAX) {
+            if (vec.nelts >= NGX_IOVS) {
                 break;
             }

@@ -200,7 +204,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx
             iov->iov_len += chain->buf->end - chain->buf->last;

         } else {
-            if (vec.nelts >= IOV_MAX) {
+            if (vec.nelts >= NGX_IOVS) {
                 break;
             }

Maxim D.
http://nginx.org/

Hi,

here’s a minimal configuration where I can reproduce this:

error_log debug.log debug;

events {
    worker_connections 1024;
}

http {
    uwsgi_buffers 64 8k;

    upstream nginx-test.uwsgi {
        server 10.0.0.7:13003;
        least_conn;
    }

    server {
        listen 8080;
        server_name nginx-test.com;

        location /api/ {
            include uwsgi_params;
            uwsgi_pass nginx-test.uwsgi;
        }
    }
}

Here’s a debug log covering server start, a single request that exhibits the
problem, and server shutdown:

http://filebin.ca/1UClE4zzhfZe/debug.log.gz

Everything goes OK for a while, just a few stray mallocs, and then maybe half
a minute into the request (the time varies), after maybe 20-25 MB have been
transferred, the flood of mallocs starts:

$ grep malloc: debug.log | cut -d ' ' -f 2 | uniq -c
4 14:34:51
1 14:34:52
3 14:34:56
1 14:34:59
1 14:35:03
2 14:35:12
1216 14:35:27
1135 14:35:28
2144 14:35:29
1996 14:35:30
520 14:35:31

(That last second of mallocs is only smaller because I stopped the client, so
the request was aborted)

I hope that debug log turns up something informative — and thank you again for
your time on this.

Cheers,

Gulli

Posted at Nginx Forum:

Hi,

Trivial workaround is to use “uwsgi_buffers 8 64k” instead.
Or you may try the following patch:

Thank you! I tried the uwsgi_buffers workaround in production, and the patch
in my reproduction setup, and indeed both seem to fix this problem; the
request runs to completion with no memory growth.
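
For anyone following along, the workaround as I applied it is just a change of
buffer geometry in the http block: the per-connection budget stays the same
(8 × 64k = 512k, same as 64 × 8k), but the buffer count stays below the
16-entry stack iovec array Maxim described:

    http {
        # was: uwsgi_buffers 64 8k;  more than 16 buffers forced allocations in ngx_readv_chain()
        uwsgi_buffers 8 64k;
        # (hypothetical: the same swap could be applied to proxy_buffers for proxied locations)
    }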

  • your backend app returns data in very small chunks, thus there
    are many ngx_readv_chain() calls;

That’s a likely cause of high CPU usage in Nginx, right? It goes to 20% for
this one request (without debug), the Python app taking the rest. My intuition
was that joining chunks on the Python side would be much more expensive … but
those thousands of ngx_readv_chain() calls per second are quite costly too, I
take it?

Cheers,

Gulli

Posted at Nginx Forum:

Hello!

On Tue, Jul 22, 2014 at 01:07:58PM -0400, gthb wrote:

  • your backend app returns data in very small chunks, thus there
    are many ngx_readv_chain() calls;

That’s a likely cause of high CPU usage in Nginx, right? It goes to 20% for
this one request (without debug), the Python app taking the rest. My
intuition was that joining chunks on the Python side would be much more
expensive … but those thousands of ngx_readv_chain() calls per second are
quite costly too, I take it?

Syscalls on Python side, small packets over the network (even
local one), and syscalls on nginx side are all costly when
compared to using a reasonably sized buffer on Python side.


Maxim D.
http://nginx.org/