Deploying a Rails 3.2 app

Hello there,

I have been working at a startup for a few weeks now, and I am
responsible for setting up the production environment and for
“hardening” the product (a Rails app). By the way, I have started
reading “Deploying Rails” from PragProg.

The app will be rolled out as a private beta with about 200 members.

The startup currently owns a VPS where the staging environment runs.
For now, the plan is to install the production environment on the same
server. (I guess it’s not ideal, but is it a real mistake? Should we
reconsider those small savings?)

I inherited this configuration on the VPS: an Ubuntu install, with MySQL,
Apache/Passenger and Sphinx. Deployment is done via Capistrano.
(Do you have any comments about the Apache/Passenger combo? How does it
compare to nginx/Unicorn? Should we consider changing?)
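
For context, the Capistrano side is essentially the standard Capistrano 2
recipe for a Passenger host; simplified, and with placeholder names and
paths, it looks something like this:

    # config/deploy.rb (simplified; application, repository and paths are placeholders)
    require "bundler/capistrano"   # run `bundle install` on each deploy
    load "deploy/assets"           # precompile the asset pipeline on each deploy

    set :application, "myapp"
    set :repository,  "git@example.com:myapp.git"
    set :scm,         :git
    set :deploy_to,   "/var/www/myapp"
    set :user,        "deploy"
    set :use_sudo,    false

    server "vps.example.com", :app, :web, :db, :primary => true

    # Passenger reloads the app when tmp/restart.txt is touched
    namespace :deploy do
      task :restart, :roles => :app, :except => { :no_release => true } do
        run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
      end
    end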

Here is my battle plan:

Concerning the server/monitoring part

  • I am considering installing the New Relic service and gem to monitor
    the app and the server.
  • I am considering using Papertrail to aggregate all the logs (server
    and app).
  • Stay with exception_notification (by mail) or switch to Airbrake to
    track exceptions (a sketch of the exception_notification setup
    follows this list).
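
(For reference, the exception_notification setup is nothing exotic: with
the 2.x/3.x versions of the gem that go with Rails 3.2 it is just a
middleware hook, something like the following, where the addresses are
placeholders.)

    # config/environments/production.rb
    config.middleware.use ExceptionNotifier,
      :email_prefix         => "[MyApp] ",
      :sender_address       => %{"Notifier" <notifier@example.com>},
      :exception_recipients => %w{team@example.com}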

Concerning the app itself

  • I have finished migrating the static files to the asset pipeline.
    (These assets are precompiled by Capistrano when deploying.)
  • I am considering migrating the assets to Amazon S3: specifying the
    asset host, using the “asset_sync” gem for the static assets, and
    using the “fog” gem for the uploads through CarrierWave (see the
    first sketch after this list).
  • Use a job queue for time-consuming tasks (especially sending mail;
    are there other tasks worth delaying from the start?), with
    “delayed_job”.
  • Use SendGrid to send the mail generated by the app (delayed_job and
    SendGrid are sketched after the list as well).
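
For the S3 part, what I have in mind looks roughly like the following
sketch; the bucket and host names are placeholders and the credentials
would come from the environment:

    # config/environments/production.rb
    # point the asset helpers at the bucket (or at a CDN in front of it)
    config.action_controller.asset_host = "https://assets.example.com"

    # config/initializers/carrierwave.rb -- fog/S3 storage for the CarrierWave uploads
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider              => "AWS",
        :aws_access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
        :aws_secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
      }
      config.fog_directory = "myapp-uploads"   # placeholder bucket name
    end
    # each uploader then declares `storage :fog` instead of `storage :file`

asset_sync would get a similar initializer of its own (fog provider,
bucket and keys) so that the precompiled assets are pushed to the bucket
at the end of the deploy.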
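
And for the mail side, a minimal sketch of what I mean by delaying the
mails and routing them through SendGrid (the mailer and credential names
are just examples):

    # Gemfile
    gem "delayed_job_active_record"

    # wherever mail is sent: queue it instead of sending it during the request
    # (delayed_job's .delay proxy; UserMailer/welcome_email are placeholder names)
    UserMailer.delay.welcome_email(user.id)

    # config/environments/production.rb -- SMTP through SendGrid
    config.action_mailer.delivery_method = :smtp
    config.action_mailer.smtp_settings = {
      :address              => "smtp.sendgrid.net",
      :port                 => 587,
      :domain               => "example.com",          # placeholder
      :user_name            => ENV["SENDGRID_USERNAME"],
      :password             => ENV["SENDGRID_PASSWORD"],
      :authentication       => :plain,
      :enable_starttls_auto => true
    }

The queued jobs would then be worked off by a delayed_job worker on the
server (script/delayed_job start, or rake jobs:work).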

Do you have any remarks or suggestions?
Is there anything in these choices of services or gems (they are of
course temporary) that bothers you?
Have I forgotten any important points?

Thanks a lot


So with 200 users, a lot of this stuff just doesn’t matter (e.g. static
assets served from S3 versus static assets served straight from disk).
The choices you’ve made sound sensible, though. Stuff like switching
from Passenger to nginx + Unicorn isn’t particularly hard.

I have found Airbrake to be a little flaky of late: we stopped getting
exception notifications and it took 4-5 days of pestering their support
guys to get it fixed. I’ve heard good things about Bugsnag, although I
haven’t got around to leaving Airbrake yet.

You may wish to consider your disaster recovery plans: if your VPS
should fail, how would you replace it? I assume you have backups of the
data (or better, a slave continually replicating the master database),
but the server configuration is important too: the last thing you want
to be doing after such an incident is spending half a day reinstalling
and reconfiguring Apache, Rails, etc. I would highly recommend
automating how you build server instances. Chef, Puppet, Sprinkle,
homegrown: to me it doesn’t matter so much, as long as you can bring up
new instances easily. You may be in an environment where you can build
images that servers boot off (e.g. EC2 allows you to make AMIs), in
which case that is eventually a good idea too.
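
Even a toy recipe is a start, just so the list of packages and services
lives in code rather than in someone’s head. In Chef, for example, that
could begin as small as this (the package names are only illustrative):

    # cookbooks/appserver/recipes/default.rb -- illustrative only
    %w[apache2 libapache2-mod-passenger mysql-server sphinxsearch].each do |pkg|
      package pkg
    end

    service "apache2" do
      action [:enable, :start]
    end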

You will eventually want to split production from staging, as sharing a
server will probably bite you at some point: for example, you can’t do
load testing on staging without affecting production, and a badly
written SQL query that you’re trying out on staging could hurt
performance on production. Stuff like testing a new version of MySQL or
Ruby is harder too.

A lot of this can probably wait though.

Fred

Hello Louis,

Just to add to Fred’s good advice, specifically the paragraph on
“disaster recovery plans”…

I realize this might sound like extreme advice, but it’s based on
experience deploying complex systems to various government agencies. I
would suggest that you need three sets of your production hardware: one
to run your production system on, two to serve as your hot-swappable
backup (running in parallel and mirroring the live server), and three
to serve as your production test bed.

My experience has been that the things you do least often, like
building a new server, are the things most apt to chomp on your tender
bits. It’s surprising how fast you forget the files you modified to get
to the final magic moment with Apache and Passenger, the changes you
made and the reasons for them. Document every step you go through
building your first server, then hand your document off to someone else
to build the second. Fold their comments and changes into your document
and have another person build the third server.

Also, always have fun.

Rick

Thanks a lot for those great tips!