We are using nginx to proxy to 4 back-end servers, and we terminate SSL
at nginx. We use nginx in a somewhat special way: we make one HTTP GET
and the back-end servers give a very long (essentially never-ending)
response.
The trouble is that I notice the nginx worker processes keep using more
and more memory. When the connections to nginx terminate, everything
seems to be cleaned up and the memory usage of the worker processes
drops back to normal.
I can reproduce the problem every time with a test setup of one nginx
instance and a special servlet that only dishes out a constant stream of
serialized Java objects (if you are interested, I can also provide this
servlet). Then, using about 50 long-running HTTP GETs (simulating 100
clients), I can always make nginx's memory usage grow.
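For context, a minimal sketch of the kind of setup described above (SSL termination in front of four streaming back ends). The upstream addresses, certificate paths, and location name are all hypothetical, not taken from the original post:

```nginx
# Hypothetical sketch: SSL termination proxying to 4 streaming back ends.
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
    server 10.0.0.4:8080;
}

server {
    listen              443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/certs/example.key;  # hypothetical path

    location /stream {
        proxy_pass http://backend;
        # For a never-ending response, disabling response buffering keeps
        # nginx from spooling the stream into proxy buffers or to disk.
        proxy_buffering off;
    }
}
```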
[…]
I am running nginx/1.0.11.
Please try 1.1.x. There is one known problem in 1.0.x which causes
nginx to allocate about 2 pointers per buffer sent; this is known to
be noticeable in the long-running-response case.
The problem was fixed in 1.1.4, but unfortunately it's impossible to
merge the fix into 1.0.x due to a required API change.
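If upgrading is not immediately possible, and assuming the overhead really scales with the number of buffers sent, then configuring fewer but larger proxy buffers might slow the growth, since the same byte count is delivered in fewer buffer sends. A hedged sketch only; the sizes and upstream name are illustrative, not recommendations from the thread:

```nginx
location /stream {
    proxy_pass http://backend;  # hypothetical upstream name
    # Fewer, larger buffers mean fewer buffers sent for the same amount
    # of data, so a per-buffer allocation accrues more slowly.
    proxy_buffer_size 64k;
    proxy_buffers 4 64k;
}
```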