My Rails app has been growing in LOC. Everything ran fine until one or
two weeks ago, when I pushed an update to my server: after a random
period of time, my Ruby processes eat 100% of the CPU and the app
becomes unresponsive. The problem is that I can’t tell which update
introduced the trouble.
$ netstat -anp shows connections between my Rails process and the
PostgreSQL database that are never properly closed; the Rails app is
certainly hanging there.
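To quantify those lingering connections, tallying the TCP states from netstat output is a quick first step. A sketch, run here against a captured sample so it is reproducible (the addresses are made up; on the server you would feed it live netstat -ant output as shown in the comment):

```shell
# On the server: netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c
# Same pipeline against a captured sample (addresses are illustrative):
netstat_sample='tcp 0 0 127.0.0.1:5432 127.0.0.1:40001 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:40002 CLOSE_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:40003 CLOSE_WAIT'
echo "$netstat_sample" | awk '{print $6}' | sort | uniq -c
```

A pileup of CLOSE_WAIT on the Rails side usually means the app received the peer’s close but is holding sockets it never closes itself.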
So far I have been unable to identify the source of the problem, even after:
reinstalling on a fresh operating system (Debian Lenny)
switching from connecting to PostgreSQL over remote TCP to local Unix
sockets
updating Nginx
updating Rails and other gems
updating plugins, and removing some that are not so useful
moving from Thin instances to Nginx+Passenger
removing suspicious and recently added lines of code that could be the
problem
Everything works fine on my dev machine. On the production server, after
a random amount of time, it suddenly goes crazy. It’s terribly painful
to hunt down, and I don’t see any new avenues to investigate.
Recently I have been seeing a new error message from time to time, one
that disappears on the next request:
A copy of XX has been removed from the module tree but is still active!
Could that be related to some memory leak that eventually locks a
Rails process at 100% CPU after some time?
Has anyone had trouble like this? Does anyone have an idea where the
problem could come from, or how to tackle it?
As it’s random, I can make modifications, be happy after 6 hours
thinking that it all works, then watch it fail 10 minutes later…
Running ApacheBench (ab) against the site gives:
Benchmarking www.digiprof.fr (be patient)…
Test aborted after 10 failures
: Operation now in progress
Then it quits.
But if I look in the log file on the server, 12 requests appear from my
IP address!!! ab by default should make only 1 request. Is Rails
receiving the request but never sending a response, with ab retrying 11
more times? Yet if I check my website with Firefox or with curl, it
works perfectly.
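Since ab aborts but curl works, one workaround is to generate concurrent requests yourself. A minimal Ruby sketch: it spins up a throwaway TCP responder so the snippet is self-contained; against the real app you would point the host and port at the server instead (all names here are illustrative, not from the app in question):

```ruby
require 'socket'
require 'net/http'

# Throwaway one-line HTTP responder standing in for the app
# (port 0 lets the OS pick a free port).
server = TCPServer.new('127.0.0.1', 0)
server.listen(128)
port = server.addr[1]
Thread.new do
  loop do
    client = server.accept
    client.readpartial(4096)  # consume the request headers
    body = 'ok'
    client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n" \
                 "Connection: close\r\n\r\n#{body}")
    client.close
  end
end

# Fire 20 concurrent requests and tally the status codes,
# which is roughly what ab does.
codes = Array.new(20) {
  Thread.new { Net::HTTP.get_response('127.0.0.1', '/', port).code }
}.map(&:value)
puts codes.tally
```

If the hang reproduces, some of these threads will simply block forever instead of returning "200", which points the finger at the app rather than at ab.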
My Munin graphs show that when the Rails app goes out of control, there
is a rise in the number of interrupts and context switches.
When you say 100%, do you mean the usage goes up and then bounces
around, say between 95% and 100%? Or do you mean it flatlines at
exactly 100%, with no bouncing?
The former means an infinite loop that accesses some IO resource, such
as the wire or the database. You could also have some kind of endless
conversation, where event A (such as an Ajax hit) triggers event B
(such as a page refresh), which triggers A again.
The latter means you have a simple infinite loop that is busy doing
only Ruby statements, such as “nil while true”.
How are your unit tests doing? Do they cover all this logic, so they
might show a similar loop or dead spot?
Can you “comment out” entire blocks of your app, such as entire
controller actions, and then run the app and see if the problem goes
away? If it does, the problem is in the last action you clobbered, so
put it back in and then clobber half of it. Keep clobbering until you
find the region of code causing it.
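The clobber-half-and-retest strategy above is a binary search over changes, and if the app lives in git, git bisect automates the bookkeeping. A toy demonstration in a throwaway repo with an artificial "bug" introduced in rev 4 (in real life the test step would be "deploy and watch top for a few hours", not a grep):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
  echo "rev $i" > app.rb
  git add app.rb
  git commit -qm "rev $i"
done
# Pretend the CPU spin appeared in rev 4: a revision is "bad" if
# app.rb says rev 4 or rev 5. bisect run treats exit 0 as good and
# exit 1 as bad, and homes in on the first bad commit.
git bisect start HEAD HEAD~4
git bisect run sh -c 'grep -qE "rev [45]" app.rb && exit 1 || exit 0' | tee bisect.out
git bisect reset >/dev/null
```

With n revisions between a known-good and known-bad state, this takes about log2(n) deploys instead of n.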
All generic techniques - no, I don’t know the difference here between
WEBrick and Passenger - but they generally can’t hurt!
Total shot in the dark: resource acquisition deadlock? I seem to recall
that Passenger runs a cluster of Mongrels, while I assume you’re only
running one WEBrick process at a time…
Excellent investigation and evidence. I hope you continue, and then do
something with it (i.e., a bug report to the appropriate forum).
I will report any progress here.
When you say 100%, do you mean the usage goes up and then bounces
around, say between 95% and 100%?
Or do you mean it flatlines at exactly 100%, with no bouncing?
$ top shows that it’s locked at 100%. From time to time a new ruby1.8
process pops up then goes away, but all locked ruby1.8 processes are at
16.7 to 17% CPU usage and 8.4 to 8.5% memory usage.
How are your unit tests doing? Do they cover all this logic, so they
might show a similar loop or dead spot?
Nearly 99% of the code is covered, and all tests pass.
Can you “comment out” entire blocks of your app
Yes, that’s what I am doing right now. I have commented out 100% of my
model code, unloaded a few infrequently updated plugins, and commented
out all model calls in my controllers, so only empty pages get
returned.
I’ll see how that works out. The problem is that the last failure
happened after 8 hours of working perfectly, so I can’t tell whether a
change helped unless I wait at least that long, assuming it doesn’t
fail sooner.
Total shot in the dark: resource acquisition deadlock? I seem to recall
that Passenger runs a cluster of Mongrels
I switched from Thin, to pure Mongrel, to Passenger, and they all
failed in much the same way, but I’ll keep that in mind just in case
and try with a single WEBrick running alone.
Thank you all for your assistance; I need new eyes on this problem, as
I’m sure I’m not looking in the right place.
Commenting out almost all of my application yielded a 10-hour run
without any problem, so I added back the latest pieces of code I
thought the bug(s) would be in, and still got a 10-hour continuous run
without any problem!
Now I am adding the code back little by little. I have no idea where
the bug is, but I’m sure I’ll be very surprised once I nail it! The
worst thing is that I’m probably looking at it right now without
knowing it’s the culprit.
I’m narrowing it down more and more. I have added back most of the
functionality of my website, and it still hasn’t failed, so I
definitely don’t have a single clue where that bug is hiding! All the
pieces of code I was suspecting have been re-enabled, and they haven’t
failed on me.
Passenger uses Mongrel? Is that really true? I have no idea, but the
modrails.com site lists Passenger as being faster than Mongrel while
running a couple of apps. I’d almost have to say it isn’t true, if not
because of the performance difference, then because Passenger shares
none of Mongrel’s configuration settings. If those files are sitting
around, they are cleverly hidden.
On Jul 30, 1:48 am, Marnen Laibow-Koser wrote:
Here is the cpuinfo of the server: (edited for brevity)
vendor_id : GenuineIntel
cpu family : 15
model : 6
model name : Intel® Pentium® 4 CPU 3.00GHz
stepping : 5
cpu MHz : 2999.964
cache size : 2048 KB
cpu cores : 1
It’s a P4 with Hyper-Threading. The only other machine I have access to
is a PowerPC, and I have never had any problem at all on it. But as I
said, the Rails process goes out of control after 2 to 8 hours; before
that, everything works perfectly fine. I guess somewhere there is a
piece of code that aggregates objects and grows without bound.
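One classic way a long-lived Rails process ends up aggregating objects without bound is caching per-request data in a class-level structure that nothing ever prunes. A hypothetical sketch of that pattern (the names are illustrative, not from the app in question):

```ruby
# A class-level array lives for the whole life of the app-server
# process, so anything pushed here survives across requests and is
# never reclaimed by the garbage collector.
class PageCache
  @@entries = []

  def self.record(page)
    @@entries << page  # grows forever: nothing ever evicts entries
  end

  def self.size
    @@entries.size
  end
end

# Simulate 1,000 requests each leaving one object behind.
1000.times { |i| PageCache.record("page-#{i}") }
puts PageCache.size  # => 1000, and still climbing in production
```

This leaks memory rather than CPU directly, but once the process starts swapping or the GC starts thrashing, it shows up as exactly this kind of delayed, load-dependent meltdown.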
Now 100% of my own code has been re-enabled, and everything has been
stable for the last 3 hours, but I still have to wait. For the last 3
days everything has been working as I reactivated the application bit
by bit. The only things left out are a few plugins; I’ll gradually let
them back in.
For some mysterious reason, I can’t use ApacheBench to hammer my server
and accelerate the failure; it keeps getting errors even though the app
is reachable with Firefox or curl.
I have reactivated my whole application except rcov_plugin, for which I
had not installed the gem on the production server. I don’t know if
that’s the cause, but now my app has been running flawlessly, as if
nothing ever happened…
It finally failed 7 hours and 40 minutes after my last update! I’m
feeling lucky, because it’s down to a tie-break between 2 things: the
acts_as_tree plugin (which might be clashing with acts_as_list) and my
sitemap generator.
This week-long investigation and hunt will finally come to an end. Stay
tuned for tomorrow’s last episode.
Nailed! The problem was acts_as_tree. It has been sentenced to an
unlimited ban from my app.
Marnen Laibow-Koser wrote:
You probably want awesome_nested_set anyway.
At the time I was reviewing the various options, I should have picked
awesome_blabla instead of acts_as_messy_tree. Damn!
I’m not sure I’d have the patience for all the stuff you’ve done!
I have no other choice, as this Rails app is my business.
So remember: acts_as_tree doesn’t seem to play nicely with other
plugins (acts_as_list?), so be careful. I hope this thread will save
other people days of work.
Nailed! The problem was acts_as_tree. It has been sentenced to an
unlimited ban from my app.
That seems surprising, as there is so little in acts_as_tree, unless it
is an interaction with something else that you have. Is there a
possibility you could be setting up a loop in the tree somehow? I don’t
know whether that would give the symptom described, but if there is a
flaw in your design that allows a loop to be set up, you could get the
same issue with another acts_as_…
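A loop in the tree would indeed produce exactly this symptom: any ancestors-style walk follows parent links forever and pins the CPU. A minimal sketch in plain Ruby, with no ActiveRecord involved; Node here is just a stand-in for an acts_as_tree model, and the cycle guard is one possible defense:

```ruby
# Stand-in for an acts_as_tree record: each node knows only its parent.
Node = Struct.new(:name, :parent)

# Naive ancestors walk in the style of acts_as_tree: follow parent
# links until nil. With a cycle in the chain, this never terminates
# and spins at 100% CPU.
def ancestors(node)
  chain = []
  chain << node while (node = node.parent)
  chain
end

# Cycle-safe variant: remember visited nodes and fail fast on a repeat.
def safe_ancestors(node)
  seen = {}
  chain = []
  while (node = node.parent)
    raise "cycle detected at #{node.name}" if seen[node.object_id]
    seen[node.object_id] = true
    chain << node
  end
  chain
end

root  = Node.new('root', nil)
child = Node.new('child', root)
puts safe_ancestors(child).map(&:name).inspect  # => ["root"]

# Corrupt the tree: root now points back at its own child.
root.parent = child
begin
  safe_ancestors(child)
rescue RuntimeError => e
  puts e.message  # cycle detected at root
end
```

In a database-backed tree the same corruption is a row whose parent_id chain eventually leads back to itself, which a single bad update can create; a validation or a guard like the one above turns an invisible spin into a loud error.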