[FATAL] failed to allocate memory

I’ve been having a heck of a time keeping my Typo FCGI sockets up and
running. I haven’t been able to find much help in the log files other
than quite a few “connection refused” and “backend died” error messages.
Just a moment ago I was logged into the server while browsing my blog
and was given the following error: [FATAL] failed to allocate memory.
I haven’t seen this on any of the other Rails apps whose deployment
I’ve managed, so I’m really curious whether this is a Typo-specific
error or not. My environment is as follows:

Apache -> Lighttpd -> FCGI Sockets (2)
Typo 4.0.2
Ruby 1.8.4
TextDrive Shared Hosting

Any pointers in the right direction would be appreciated. I’d love to
be able to turn off my FCGI restart scripts.

Thanks.

Josh

You’re hitting the resource limits on TextDrive. How many
fcgi listeners are you running?

I’ve got 2 running (fairly) stably after switching off as much as
possible - see

http://www.stevelongdo.com/articles/2006/08/04/typo-4-0-and-memory-reduction

On 8/15/06, Dick D. [email protected] wrote:

You’re hitting the resource limits on TextDrive. How many
fcgi listeners are you running?

I thought it might be something like this but was given no log message
indicating that my process was being killed. Thanks for the link and
the shove in the right direction!

Josh

On 8/15/06, Josh K. [email protected] wrote:

I thought it might be something like this but was given no log message
indicating that my process was being killed. Thanks for the link and the
shove in the right direction!

So even with the removal of the majority of the components, my two
processes are still hovering around 42-48 MB; is this normal?

Josh

On 8/15/06, Josh K. [email protected] wrote:

On 8/15/06, Josh K. [email protected] wrote:

I thought it might be something like this but was given no log message
indicating that my process was being killed. Thanks for the link and the
shove in the right direction!

So even with the removal of the majority of the components, my two
processes are still hovering around 42-48 MB; is this normal?

Josh

That’s still higher than I’d like to see, but it’s within the range
that people have reported.

Scott

On 8/15/06, Scott L. [email protected] wrote:

I’m not focusing on doing a lot of speed or memory improvements with
4.0 right now. I’m going to release 4.0.3 with a couple more bug
fixes soon, but after that I’m going to start concentrating on Typo
4.1. One of the big goals for 4.1 is performance; if I find anything
big and obvious, then I’ll back-port the change to 4.0, but I don’t
want to experiment with 4.0–that’s what 4.1 is for.

Scott, your work on this has been great; please let me know where you
need help fixing bugs, writing code/docs, or running tests.

Josh

“Scott L.” [email protected] writes:

I’m not focusing on doing a lot of speed or memory improvements with
4.0 right now. I’m going to release 4.0.3 with a couple more bug
fixes soon, but after that I’m going to start concentrating on Typo
4.1. One of the big goals for 4.1 is performance; if I find anything
big and obvious, then I’ll back-port the change to 4.0, but I don’t
want to experiment with 4.0–that’s what 4.1 is for.

I wonder how much effect the new regime of including more things has
had on memory usage? In theory we could be far more particular about
when we fetch stuff, but then we end up paying with more load on the
database server. It’s all about the trade-offs, I’m afraid.

On 8/15/06, Scott L. [email protected] wrote:

That’s still higher than I’d like to see, but it’s within the range
that people have reported.

…and having said that, I just checked mine, and I’m seeing 56-62 MB
after two days. And, more annoyingly, it’s racked up about 9 hours of
CPU time along the way. Admittedly, this is a slow box (Athlon 700
MHz), and my blog is fairly busy.

I’m not focusing on doing a lot of speed or memory improvements with
4.0 right now. I’m going to release 4.0.3 with a couple more bug
fixes soon, but after that I’m going to start concentrating on Typo
4.1. One of the big goals for 4.1 is performance; if I find anything
big and obvious, then I’ll back-port the change to 4.0, but I don’t
want to experiment with 4.0–that’s what 4.1 is for.

Scott

Scott

One thing I’d recommend (if you aren’t doing this already) is to build
a memory profiler component with an action that dumps your memory
profile data. Then you can run zillions of queries without paying the
price of the memory profiler per hit, while still having your data
always be accessible.

If one of these was easily available, then I wouldn’t have to write my
own when I start working on memory leaks. Hint, hint.

Scott
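Scott’s component idea could be sketched along these lines in Ruby (a
hypothetical sketch; `object_counts` and `memory_profile_report` are
invented names, and the controller wiring shown in the trailing comment
is an assumption, not Typo code):

```ruby
# Tally every live object on the Ruby heap by class name. Doing this
# only when the dump action is hit keeps normal requests cheap.
def object_counts
  counts = Hash.new(0)
  ObjectSpace.each_object(Object) { |obj| counts[obj.class.to_s] += 1 }
  counts
end

# Render the top N classes as a plain-text report, largest first.
def memory_profile_report(top = 20)
  object_counts.sort_by { |_, n| -n }.first(top).
    map { |klass, n| format('%8d  %s', n, klass) }.join("\n")
end

# Inside a Rails controller, this could back the dump action:
#   def dump
#     render :text => memory_profile_report
#   end
```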

This is very similar to what I use, mostly because doing anything
better becomes an exercise in a C extension to hook into the Ruby
interpreter.

Seems like a C heap walker would be smart enough to use for this
purpose. Maybe when Apple adds RoR they will make Xcode able to
profile Ruby on OS X?

On TextDrive the profiling code has enough overhead that it sometimes
kills the thread. I have noticed that the number of Blog objects seems
to stack up over time, though. I’ve seen as many as 22 instantiated at
the same time. Considering that I only have one Blog, that seems kind
of high.

Granted they don’t take up much memory themselves, but I wonder if
they hold on to arrays of Content objects and prevent them from being
garbage collected. I am still digging into it.
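One quick way to check a count like that from script/console is a small
ObjectSpace tally (`live_count` is an invented helper, not part of
Typo):

```ruby
# Count live instances of one class (and its subclasses) on the heap.
def live_count(klass)
  n = 0
  ObjectSpace.each_object(klass) { |_| n += 1 }
  n
end

# e.g. live_count(Blog) from script/console shows how many Blog
# objects are alive right now.
```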

On 8/15/06, Scott L. [email protected] wrote:

One thing I’d recommend (if you aren’t doing this already) is to build
a memory profiler component with an action that dumps your memory
profile data. Then you can run zillions of queries without paying the
price of the memory profiler per hit, while still having your data
always be accessible.

If one of these was easily available, then I wouldn’t have to write my
own when I start working on memory leaks. Hint, hint.

Heh. You can get away with something simpler. Do what I did:

  1. Add the profile hook as an after filter on the main article
    controller.
  2. Add a class-level variable (@@next_time_to_run).
  3. Set an interval at which to run the profile dump.

Links to profiling code snippet, etc., are here:

http://tinyurl.com/n42nf
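A minimal sketch of those three steps, assuming invented names
(`MemoryProfileFilter`, a 15-minute interval) rather than Paul’s actual
snippet:

```ruby
# Interval-gated profile dump: the filter body checks a class-level
# timestamp, so steady traffic only pays for the ObjectSpace walk once
# per interval.
module MemoryProfileFilter
  INTERVAL = 15 * 60            # seconds between dumps (assumed value)
  @@next_time_to_run = Time.at(0)

  # True at most once per INTERVAL; cheap on every other call.
  def self.due?(now = Time.now)
    return false if now < @@next_time_to_run
    @@next_time_to_run = now + INTERVAL
    true
  end

  # Write the 20 most numerous classes to the given IO.
  def self.dump(io = $stdout)
    counts = Hash.new(0)
    ObjectSpace.each_object(Object) { |o| counts[o.class.to_s] += 1 }
    counts.sort_by { |_, n| -n }.first(20).each do |klass, n|
      io.puts "#{n}\t#{klass}"
    end
  end
end

# Wired up on the article controller, roughly:
#   after_filter { MemoryProfileFilter.dump if MemoryProfileFilter.due? }
```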

On 8/15/06, Steve L. [email protected] wrote:

Seems like a C heap walker would be smart enough to use for this purpose.
Maybe when Apple adds RoR they will make XCode able to profile Ruby on OS X?

DTrace is coming with Leopard.

On 8/15/06, Paul B. [email protected] wrote:

Heh. You can get away with something simpler. Do what I did:

  1. Add the profile hook as an after filter on the main article controller.
  2. Add a class-level variable (@@next_time_to_run).
  3. Set an interval at which to run the profile dump.

Links to profiling code snippet, etc., are here:

http://tinyurl.com/n42nf

Cool. Personally, I’d rather be able to trigger it on-demand, but the
two approaches are only a couple minutes apart from each other :-).

Scott

On 15/08/06, Scott L. [email protected] wrote:

I kind of doubt that dtrace will be very useful in debugging memory
use inside of Ruby code–the interpreter will probably swizzle things
enough to screw up dtrace. OTOH, it’ll probably be nice for I/O
work…

DTrace has high-level probe providers as well as the kernel-level
stuff; it’s not like strace or truss: you get to monitor activity from
the top level right down to the internal kernel methods.

http://blogs.sun.com/roller/page/bmc?entry=dtrace_on_rails

Haven’t run Typo on OpenSolaris yet - has anyone else got it working?

On 8/15/06, Paul B. [email protected] wrote:

On 8/15/06, Steve L. [email protected] wrote:

Seems like a C heap walker would be smart enough to use for this purpose.
Maybe when Apple adds RoR they will make XCode able to profile Ruby on OS X?

DTrace is coming with Leopard.

I kind of doubt that dtrace will be very useful in debugging memory
use inside of Ruby code–the interpreter will probably swizzle things
enough to screw up dtrace. OTOH, it’ll probably be nice for I/O
work…

Scott

“Steve L.” [email protected] writes:

On TextDrive the profiling code has enough overhead that it sometimes
kills the thread. I have noticed that the number of Blog objects seems
to stack up over time, though. I’ve seen as many as 22 instantiated at
the same time. Considering that I only have one Blog, that seems kind
of high.

Every so often I think “I really should rewrite ActiveRecord so that
there’s only one instance of a given object in memory at any one time.”

And then I remember what a pain in the arse it can be to sort that
out.

Granted they don’t take up much memory themselves, but I wonder if they hold
on to arrays of Content objects and prevent them from being garbage
collected. I am still digging into it.

If they’re not disappearing it’s because something permanent is
holding a link to them, which is unlikely to be a content
object. Presumably it’s possible to write something to do an
objectspace walk and find all the objects linking to your multiple
blog instances.
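A rough sketch of such a walk (hypothetical code, not from the thread):
it scans each object’s instance variables plus Array/Hash contents, so
references held inside C-level structures won’t be found, and the
`targets` array you pass in will itself show up as a referrer.

```ruby
# Return every live object that appears to hold a direct reference to
# one of `targets` (e.g. the stray Blog instances).
def find_referrers(targets)
  target_ids = {}
  targets.each { |t| target_ids[t.object_id] = true }
  referrers = []
  ObjectSpace.each_object(Object) do |obj|
    next if target_ids[obj.object_id]   # skip the targets themselves
    held =
      case obj
      when Array then obj
      when Hash  then obj.values
      else obj.instance_variables.map { |iv| obj.instance_variable_get(iv) }
      end
    referrers << obj if held.any? { |v| target_ids[v.object_id] }
  end
  referrers
end

# e.g. find_referrers(ObjectSpace.each_object(Blog).to_a)
```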

On 8/15/06, Piers C. [email protected] wrote:

And then I remember what a pain in the arse it can be to sort that
out.

If you look at our SQL traces, you’ll see that we can create a huge
number of Blog objects per-hit. So it’s possible that 22 isn’t really
a leak. I suspect that I’ve fixed this in my 4.1 tree, but I haven’t
tested that part yet. I’m still busy deprecating old helpers.

Scott