How do you scale horizontally with Rails 3.2 on EC2?

Hello guys,
I'm currently running a staging environment whose purpose is to
replicate the production one as closely as possible before launch.

To keep the staging environment's costs down I tried using a series
of m1.small instances; the Rails one hosts a stack composed of:

  • Rails 3.2 + Nginx + Rainbows! + Ruby 1.9.3-p194.

I completed my Capistrano deploy scripts, and asset precompilation
happens on the server through Node.js.
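For context, the deploy scripts are a stock Capistrano 2 setup along
these lines (a sketch, not my exact Capfile; loading the bundled
deploy/assets recipe is what triggers rake assets:precompile on the
server, with ExecJS picking up Node):

# Capfile
load 'deploy'
load 'deploy/assets'   # hooks deploy:assets:precompile into the deploy
load 'config/deploy'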

A couple of considerations:

  • depending on the instance size, each deploy can take a long time
    because of the CPU steal time EC2 has. The hypervisor can and will
    decide to allocate CPU cycles to other tenants' instances, leaving
    you with a sort of crippled system. I'm reaching a point where
    just precompiling the assets takes 450,000 ms (about 7.5 minutes),
    and that's without taking into account that, since CPU usage
    skyrockets, there is little room left for the actual processes on
    the live server.
  • the precompile task by default precompiles the assets twice,
    digest and non-digest. I might just run assets:precompile:primary
    and get away with it (see the sketch after this list), but still.
  • I benchmarked the precompile task on other instance types; the
    ones that give the best results are the high-CPU ones, and they
    are not cheap. Given that we don't need all that CPU power during
    the normal lifecycle of the app, it seems silly to use this kind
    of instance just to cut down deploy (precompile + startup) time.
  • when the assets are finally precompiled I upload them to Amazon S3/
    CloudFront via the excellent asset_sync gem.
  • I can't make Rails accept that assets are hosted elsewhere; the
    staging app won't start, or bombs out on the first request, if I
    precompile the assets on my local machine at some point during
    deployment and upload them to S3, leaving the app asset-free on
    the server. Disabling the asset pipeline won't help, since it will
    complain about missing manifests, etc.
  • Rails is very slow starting up, again due to the CPU steal time
    and some falcon patches still missing from MRI. This is another
    matter, but if you don't pay attention in your deploy process you
    might end up with people seeing old assets, or with pretty large
    downtime windows if you don't have a zero-downtime deploy like the
    one provided by Unicorn/Rainbows!/Zbatery, etc.
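About the double precompile: a minimal sketch of a Capistrano 2
override that runs only the digest pass (the task wiring mirrors the
stock deploy/assets recipe; assets:precompile:primary is the Rails 3.2
task that the default assets:precompile invokes alongside the
non-digest one):

# config/deploy.rb
namespace :deploy do
  namespace :assets do
    # Override the stock recipe task to skip the non-digest pass.
    task :precompile, :roles => :web, :except => { :no_release => true } do
      run "cd #{latest_release} && #{rake} RAILS_ENV=#{rails_env} " \
          "RAILS_GROUPS=assets assets:precompile:primary"
    end
  end
end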

So, how do you scale horizontally with Rails 3.2?
Currently I have enough firepower to sustain a large number of users
given the combination of fine tuning, nginx, load balancers, reverse
proxies, Rainbows! and whatnot, but firing up another instance and
managing the assets is becoming a problem.

Do you precompile locally, to have a central asset-creation point and
avoid CPU burn and potentially long deploys on the servers?
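The direction I'm tempted to take is something like the following
Capistrano 2 task; a sketch under my own assumptions (the task name
and paths are mine), relying on the fact that in Rails 3.2, with
config.assets.compile = false and config.assets.digest = true, the
helpers resolve digested paths from public/assets/manifest.yml, so
only the manifest has to reach the server while asset_sync (which
hooks into assets:precompile) pushes the real files to S3:

namespace :deploy do
  namespace :assets do
    # Precompile on the local machine instead of the server; asset_sync
    # uploads the compiled files to S3 as part of the rake task, then we
    # ship only manifest.yml so the app can resolve digests at boot.
    task :precompile_locally, :roles => :web, :except => { :no_release => true } do
      run_locally "bundle exec rake assets:precompile RAILS_ENV=#{rails_env}"
      run "mkdir -p #{latest_release}/public/assets"
      top.upload "public/assets/manifest.yml",
                 "#{latest_release}/public/assets/manifest.yml"
    end
  end
end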

It might be off topic, but what tool (Chef, Puppet, etc.) do you use
to clone and start/stop instances, given the requirement of deeply
customized config files (like, for example, the nginx ones), and to
manage adding/removing instances to an external load balancer like
the AWS one?

I'm starting to suspect that EC2 might not be the friendliest cloud
hosting platform for a large Rails 3.2 app.

To reach my assets I'm using something like this:
config.action_controller.asset_host = Proc.new do |source, request = nil, *_|
  if request && request.ssl?
    "https://#{Settings.services.amazon.cloudfront_distribution}"
  else
    "http://#{Settings.services.amazon.cloudfront_distribution}"
  end
end

If you have any tips on an asset-less deploy configuration (digested,
possibly) with the real assets hosted elsewhere, I'm all ears.

Thanks.

On May 2, 1:55am, Claudio P. [email protected] wrote:

It might be off topic, but what tool (Chef, Puppet, etc.) do you use
to clone and start/stop instances, given the requirement of deeply
customized config files (like, for example, the nginx ones), and to
manage adding/removing instances to an external load balancer like
the AWS one?

Precompiling stuff twice is just dumb, but I haven't found asset
precompilation to be nearly as slow as you report (~100s on a
c1.medium), and besides it happens while the app is still up.

As far as adding instances goes, first off I create an AMI for my app
server which has all the infrequently changing stuff on it (Ruby,
Apache, C libraries, etc.).
The application is deployed to /var/www/dressipi, which is on a
separate EBS volume. Immediately after a deploy, I create a new
snapshot of that volume.
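If you script that with fog, the snapshot step is roughly this (a
sketch only; the credentials lookup and the volume id are
placeholders, not our real setup):

require 'fog'

compute = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)

# Snapshot the EBS volume that holds /var/www/dressipi.
snapshot = compute.snapshots.create(
  :volume_id   => 'vol-12345678',                 # placeholder id
  :description => "app volume after deploy #{Time.now.utc}"
)
snapshot.wait_for { ready? }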

Adding an app server instance is then a question of creating a new
instance from my AMI and asking Amazon to create a new volume from
that snapshot and mount it at /var/www/dressipi. Since I use
autoscaling, I can in fact just increase the number of desired
instances and it will create the server in that manner and add it to
the load balancer automatically (if you use ELB then you get that
level of integration). Before we did that, I used to use fog to get
the list of servers with the appropriate tags and then generate an
haproxy config file from that.
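That last part looked roughly like this (a sketch; the tag name, the
backend port and the output path are placeholders):

require 'fog'
require 'erb'

compute = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)

# Running instances tagged as app servers ('role' is a placeholder).
servers = compute.servers.all(
  'tag:role'            => 'app',
  'instance-state-name' => 'running'
)

# Emit one haproxy backend line per server; the fragment then gets
# assembled into the full haproxy.cfg.
template = ERB.new(<<-CONF)
backend app
<% servers.each do |s| %>  server <%= s.id %> <%= s.private_ip_address %>:8080 check
<% end %>
CONF

File.write('haproxy_backend.cfg', template.result(binding))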

Fred

Hello Fred,
taking advantage of EBS indeed looks like the ideal solution.

Cheers
Cheers