Strange memory issue

Like most, I have a test and a production env. Both are CentOS 5, Ruby
1.8.6, Rails 2.0.2, MySQL 5, and Apache. Both were set up using the
same procedure. Except for the hardware (test: Intel, prod: AMD) and
memory (test: 512M, prod: 1G), both systems are essentially the same
right down to the Rails env. Unfortunately, memory usage is very
different on each.

The test system has 2 mongrel instances running. According to top,
they start at around 50M of memory, but within a few pages they are
quickly up to 100M. The production system has 5 mongrels, and they
start at around 50M as well. However, hit them with thousands of page
views and their memory usage rarely passes 60M. It’s almost as if the
GC isn’t running on the test system. There is still plenty of free
memory on the prod system (148M), while the test system will usually
use it all up and often begin swapping.

Any thoughts? Any ideas where I might look? I’d prefer not to pay for
even the 512M VM for the test system, but it doesn’t seem to want to
conserve memory.
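
One rough way to see what the GC is (or isn’t) keeping up with is to
count live objects from script/console on each box (plain MRI 1.8, no
extra gems needed); this is just a sketch, not something from my setup:

GC.start                                    # force a full collection first
counts = Hash.new(0)
ObjectSpace.each_object { |o| counts[o.class] += 1 }   # tally live objects by class
counts.sort_by { |_, n| -n }.first(10).each do |klass, n|
  puts "#{n}\t#{klass}"                     # the ten most common classes still on the heap
end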

are they both run in production mode?

Define production mode? They have different environments, but both
have the same setup.

config/staging.rb and config/production.rb are both:

require 'syslog_logger'

# Settings specified here will take precedence over those in config/environment.rb

# The production environment is meant for finished, "live" apps.
# Code is not reloaded between requests
config.cache_classes = true

# Use a different logger for distributed setups
config.logger = RAILS_DEFAULT_LOGGER = SyslogLogger.new('msc')
config.logger.level = Logger::INFO

# Full error reports are disabled and caching is turned on
config.action_controller.consider_all_requests_local = false
config.action_controller.perform_caching = true
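
Just to rule out an environment mix-up, each box can be asked which
environment the app actually booted with (mongrel_cluster takes its
environment from the environment: line in config/mongrel_cluster.yml).
A quick check from script/console:

script/console production
>> RAILS_ENV
=> "production"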

I think what he’s asking is whether you are running the development
system in development mode for your Rails application. Isn’t there a
host of debugging and logging machinery running in the background that
could account for the increased memory usage?
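
For reference, the stock Rails 2.0 development settings look roughly
like the lines below (quoting from memory, so treat them as
approximate); class reloading and full error reports are the big
differences from production:

# config/environments/development.rb (approximate defaults)
config.cache_classes = false                                   # reload application classes on every request
config.whiny_nils = true                                       # log a warning when methods are called on nil
config.action_controller.consider_all_requests_local = true    # full error pages for every request
config.action_controller.perform_caching = false               # no page/action/fragment caching
config.action_view.debug_rjs = true                            # wrap RJS responses in debugging output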

Try them both with the same ‘production’ environment – if it still has
the leak then there must be some ‘other’ or ‘hidden’ variable in there
somewhere [though I myself have used a production plus ‘production2’
style of environments before and it worked fine].
-R

No, I purposely set up the staging and production environments to be
the same. The log level is the same, all debug that I know of is
turned off in both systems, and caching is turned on. My previous
message had the environment file (exactly the same between the two),
and deploy.rb differs only in the IP addresses of the target machines
to install to.

Does Rails look at RAILS_ENV behind the scenes and do something
different? I’m aware of different defaults for logging level, but is
there something else? Trying to find good documentation on this stuff
is not easy (my only real problem with Rails in general). If so, then
I would be concerned, since you want a staging environment to be as
close to production as possible. I should be able to get Rails to
behave as production even when RAILS_ENV != ‘production’. What if I
have two “production” environments (one is a backup)?
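
One way to guarantee a non-production environment behaves like
production is to load the production file from it and only override
what has to differ. This is only a sketch (it assumes Rails evaluates
the environment file with config in scope, as 2.x does), not what I
actually run:

# config/environments/staging.rb -- reuse production's settings verbatim
eval(File.read(File.join(File.dirname(__FILE__), 'production.rb')), binding)

# staging-only overrides go below; the 'msc-staging' name is just an example
# config.logger = RAILS_DEFAULT_LOGGER = SyslogLogger.new('msc-staging')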

Do you have the exact same data set in your db in both locations? A
lot of times this can happen when you start getting more data in your
db and you start pulling back more rows than you think you are; this
can bloat mongrels the way you are seeing.
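
A quick way to sanity-check that is to see how many rows the hot
actions really load, and to cap them explicitly; the Widget model below
is just a placeholder for whatever your busiest pages query:

# From script/console on each box (Rails 2.0 finder syntax)
Widget.count                                          # how many rows are really in the table
widgets = Widget.find(:all, :limit => 50,
                      :order => 'created_at DESC')    # cap what a single request pulls back
puts widgets.size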

-Ezra

Did you build from scratch in both places?
For me, on ‘big loads’ the mongrels stay at [sigh] about 120MB each.

Just tried again with an environment of ‘production’, and the same
thing happens – a few clicks and memory grows to about 100M. Since I
will be moving my production environment anyway, I built a brand new
CentOS 5.1 slice (on SliceHost). I followed my original instructions
which involve building Ruby 1.8.6 from source and using yum and gem
for everything else. Within 5-6 page views mongrel_rails goes from
50M to 98M. That does not happen on my current production server
(CentOS 5.0 on a dedicated server at ServerBeach) – after several
thousand page views the mongrel processes are still at 55M.

It doesn’t feel like a leak to me because it levels off and stays
pretty steady. I think it’s the GC executing differently or something
to that effect, but I don’t remember doing anything different when I
built the original production server. At this point, I have two
machines running exactly the same application with exactly the same
Rails config and VERY different memory usage. The only thing I can
think of is something with the build args or libs for ruby, because
everything else was done with yum and gem. I am assuming that yum and
gem will install exactly the same things on both machines.
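
If the suspicion is the ruby build itself, the two interpreters can be
compared directly; rbconfig records exactly how each one was built:

# run in irb or script/console on each server
require 'rbconfig'
puts Config::CONFIG['configure_args']   # the flags passed to ./configure at build time
puts Config::CONFIG['CFLAGS']           # compiler flags baked into the build
puts RUBY_PLATFORM                      # e.g. x86_64-linux vs i686-linux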

BTW, ruby -v on both systems returns: ruby 1.8.6 (2007-03-13
patchlevel 0) [x86_64-linux]

What do most people see their mongrel_rails processes running at for
memory? Could I be looking at a difference in the way top reports
memory between CentOS 5.0 and 5.1?
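
Since the 32bit vs 64bit question keeps coming up, a one-liner settles
what each interpreter actually is (a Fixnum occupies a machine word, so
1.size is 8 on a 64bit build and 4 on a 32bit build):

ruby -e 'puts 1.size; puts RUBY_PLATFORM'
# => 8
# => x86_64-linux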

Can anyone point me to a way to build ruby with the 32bit libraries on
a 64bit distro? On my 4 servers, that seems to be the difference. The
servers with the 32bit distro level off at < 60M per mongrel. The
64bit servers level off closer to 120M per mongrel. On a server with
less than 2Gig of RAM, 64bit seems like a complete waste to me. Other
than an ego trip, it doesn’t gain you anything because you don’t need
the address space.
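
From what I’ve read, the usual route seems to be to install the 32bit
versions of the build libraries and force -m32; the package names and
prefix below are guesses for a stock CentOS 5 box, and I haven’t
verified this exact recipe:

# 32bit build dependencies (typical CentOS 5 i386 package names)
yum install glibc-devel.i386 zlib-devel.i386 openssl-devel.i386

# build ruby 1.8.6 as a 32bit binary into its own prefix
cd ruby-1.8.6
./configure --prefix=/opt/ruby32 CFLAGS="-m32" LDFLAGS="-m32"
make && make install
/opt/ruby32/bin/ruby -e 'puts 1.size'   # should print 4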

You may be able to use a fixed heap chunk size to enable it to reclaim
more of the memory and return it to the OS [i.e. tweaking the GC to
make it better]. Maybe :)
-R

You may want to ask SliceHost about it.

Exact same databases in both places, and all servers were built from
scratch. After some more digging, I think it may be that SliceHost
uses 64bit distros for everything. I’m pretty sure my other two
servers that don’t have this issue are 32bit. If that’s it, it seems
kind of a waste to use 64bit on any of the smaller slices because you
end up bloating all your processes. If you are only running 1Gig of
RAM, there is no need for 64bit.

I’m talking with the SliceHost folks right now. We’ll see what they
have to say. If this turns out to be the issue, then their prices
aren’t as good as they look because you have to get a larger slice to
get the same performance. On my ServerBeach 1Gig machine, I can easily
get 6 mongrels running and almost never swap. On my 512M slice, I can
barely get 2 mongrels running before I start swapping. So, to get the
same performance from a SliceHost server, I would need at least a 2Gig
slice, and that is a lot more expensive than a 1Gig server at ServerBeach.

You could also try to reduce memory usage – see pennysmalls.com
or what not.
-R