On Wed, Nov 17, 2010 at 4:33 PM, Chieu [email protected] wrote:
> To agentzh,
I’ve cc’d the nginx-devel mailing list, BTW.
> I have read your lua and echo modules.
> I think the location.capture of the lua module and the echo_location_async of
> the echo module may cause this problem, too.
By design (well, I mean by Igor S.'s design), nginx subrequests do
share the same memory pool as the main request (see the definition
of the ngx_http_subrequest function). So I think an assumption here is
that a main request does not issue a lot of subrequests, at least not
usually.
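
Just to make that concrete, here is a rough sketch of a content handler
issuing a subrequest (the my_issue_subrequest name and the /sub1 location
are made up for illustration): everything the subrequest allocates comes
out of r->pool and only goes away when the main request is freed.

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Rough sketch: ngx_http_subrequest() carves the new request out of
     * r->pool and the subrequest keeps using that same pool, so whatever
     * it allocates stays around until the main request itself is freed. */
    static ngx_int_t
    my_issue_subrequest(ngx_http_request_t *r)
    {
        ngx_http_request_t  *sr;
        ngx_str_t            uri = ngx_string("/sub1");  /* made-up location */

        if (ngx_http_subrequest(r, &uri, NULL, &sr, NULL, 0) != NGX_OK) {
            return NGX_ERROR;
        }

        /* sr reuses the parent's pool; nothing is released when the
         * subrequest alone finishes */

        return NGX_OK;
    }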
If it is your own user data in each subrequest that takes up too much
room, you can free it explicitly with an ngx_pfree call, as long as
those chunks are big enough (nginx's memory pool does not track small
chunks individually, to save some CPU cycles, so ngx_pfree ignores them).
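
For example, something along these lines (just a sketch; the 64 KB size
and the my_process_chunk name are made up) hands a big chunk back as soon
as you are done with it:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    static ngx_int_t
    my_process_chunk(ngx_http_request_t *r)
    {
        u_char  *buf;

        /* big enough to land on the pool's "large" list; small allocations
         * are not tracked individually, so ngx_pfree() would just return
         * NGX_DECLINED for them */
        buf = ngx_palloc(r->pool, 64 * 1024);
        if (buf == NULL) {
            return NGX_ERROR;
        }

        /* ... fill and use buf while handling this subrequest ... */

        (void) ngx_pfree(r->pool, buf);   /* give the 64 KB back right away */

        return NGX_OK;
    }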
>     echo "took $echo_timer_elapsed sec for total.";
> }
> If there are many "echo_location_async /subX" directives, each echo
> subrequest will occupy some memory, and a lot of that memory will not
> be freed in time.
> So, if /main is requested a lot, the system will run out of memory.
Fortunately, in almost all of our web apps, "n" in your example is
quite small, usually 2 or 3, and 5 at most.
> Am I right? And what's your opinion?
I think that in theory you can explicitly force each subrequest created by
yourself to use a separate memory pool, such that when a subrequest
finalizes, it can free up its own pool as soon as possible. But I
haven't done that myself, and it's very likely that some parts of the nginx
core rely on the assumption that a subrequest's memory chunks have
the same lifetime as its parent request's. I'm not sure. I myself,
for example, have relied on this assumption in our ngx_lua module
to capture subrequest response headers for the ngx.location.capture
Lua interface.
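
For what it is worth, the "separate pool" idea above would look roughly
like the following (completely untested, all the my_* names are invented,
and it only isolates your own per-subrequest data; nginx still allocates
the subrequest object itself from the parent's pool):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    typedef struct {
        ngx_pool_t  *scratch;   /* private pool for bulky per-subrequest data */
    } my_sub_ctx_t;

    /* post_subrequest callback: destroy the private pool as soon as the
     * subrequest is finalized instead of waiting for the main request */
    static ngx_int_t
    my_post_subrequest(ngx_http_request_t *sr, void *data, ngx_int_t rc)
    {
        my_sub_ctx_t  *ctx = data;

        if (ctx->scratch) {
            ngx_destroy_pool(ctx->scratch);
            ctx->scratch = NULL;
        }

        return rc;
    }

    static ngx_int_t
    my_issue_subrequest_with_pool(ngx_http_request_t *r)
    {
        ngx_str_t                    uri = ngx_string("/sub1");  /* made up */
        my_sub_ctx_t                *ctx;
        ngx_http_request_t          *sr;
        ngx_http_post_subrequest_t  *psr;

        ctx = ngx_pcalloc(r->pool, sizeof(my_sub_ctx_t));
        psr = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
        if (ctx == NULL || psr == NULL) {
            return NGX_ERROR;
        }

        ctx->scratch = ngx_create_pool(4096, r->connection->log);
        if (ctx->scratch == NULL) {
            return NGX_ERROR;
        }

        psr->handler = my_post_subrequest;
        psr->data = ctx;

        return ngx_http_subrequest(r, &uri, NULL, &sr, psr, 0);
    }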
Cheers,
-agentzh