Testing Capistrano tasks

Does anyone have any insight into testing Capistrano tasks? More
specifically, I’m looking to add regression tests to this package,
which adds database backup tasks to Capistrano:

Scott

Not done it, but Cucumber acceptance tests would surely be a good fit
here:

Given there is a database “foo”
When I run the script
Then there should be a backup no more than 10 minutes old
And the backup should restore OK
And the restored database should contain the same records as the
original database
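
For the record, the step definitions behind a scenario like that could
be quite small. A rough sketch - the task name (“cap db:backup”), the
backups directory, and the mysqladmin call are all assumptions about
Scott’s package, not things I’ve checked:

  # features/step_definitions/backup_steps.rb
  require "open3"

  Given(/^there is a database "([^"]+)"$/) do |name|
    @db_name = name
    # assumes a local MySQL reachable without a password
    system("mysqladmin create #{name}")
  end

  When(/^I run the script$/) do
    @output, @status = Open3.capture2e("cap db:backup")
  end

  Then(/^there should be a backup no more than (\d+) minutes old$/) do |mins|
    newest = Dir.glob("backups/#{@db_name}*.sql.gz").max_by { |f| File.mtime(f) }
    raise "no backup found" unless newest
    age = Time.now - File.mtime(newest)
    raise "backup is #{age.to_i}s old" if age > Integer(mins) * 60
  end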


Matt W.
http://blog.mattwynne.net

On Tue, Jan 27, 2009 at 4:15 AM, Matt W. [email protected] wrote:

Does anyone have any insight into testing Capistrano tasks? More
specifically, I’m looking to add regression tests to this package,
which adds database backup tasks to Capistrano:
Yes, I have experience testing Capistrano. My experience with unit
testing Capistrano has been less than positive. Capistrano is
difficult to test. Basically you have to mock out the shell/run/file
transfer commands to do unit tests. The big problem is: how do you
know the shell commands are correct?
There is some benefit, though. You do get to see a list of the shell
commands that are run in one place. However, there can be many
permutations of logic inside a deploy, so it’s often difficult to
capture every situation.
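
Roughly, with Capistrano 2 the specs end up looking like this (the
recipe path, task name, and mysqldump assertion are all invented for
the example):

  require "capistrano"

  describe "db:backup" do
    before do
      # Build a throwaway configuration and load the recipe into it.
      @config = Capistrano::Configuration.new
      @config.load "recipes/db_backup.rb"   # hypothetical recipe file
    end

    it "runs mysqldump on the remote host" do
      # Stub the SSH layer; all we can assert on is the command string,
      # which is exactly the limitation described above.
      @config.should_receive(:run).with(/mysqldump/)
      @config.find_and_execute_task("db:backup")
    end
  end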

I realize that this sounds terrible, and perhaps there is a better way
to go about this, but this has been my experience.

IMO, the best way to test Capistrano is to have a staging environment
that simulates your production environment, where you deploy and make
sure things work.
I can’t recall a situation where I had non-obvious issues that would
have been prevented with unit tests. Often, non-obvious issues are
related to properly restarting processes, because the monit + shell
script interactions had issues. This is testable, of course.

You can also do an integration test by deploying to localhost or to a
test box. You can then ensure that the actual db dump exists and
properly restores.
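
Concretely, something as small as this would do it (the task name,
backup path, and scratch database are made up):

  require "open3"

  describe "cap db:backup against localhost" do
    it "produces a dump that mysql will actually restore" do
      out, status = Open3.capture2e("cap db:backup")
      status.success?.should == true

      dump = Dir.glob("backups/*.sql.gz").max_by { |f| File.mtime(f) }
      dump.should_not be_nil

      # Round-trip the dump into a scratch database to prove it restores.
      system("gunzip -c #{dump} | mysql -u root backup_test").should == true
    end
  end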

I’m sorry about the rambling infoflood/rant. I hope it is useful. :)

Brian

On Tue, Jan 27, 2009 at 12:08 PM, Brian T. [email protected]
wrote:

Yes, I have experience testing Capistrano. My experience with unit
testing Capistrano has been less than positive. Capistrano is
difficult to test. Basically you have to mock out the shell/run/file
transfer commands to do unit tests. The big problem is: how do you
know the shell commands are correct?

Well, you could test them separately in their own unit tests. Write a
spec, run it, look for expected output.
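
i.e. something as dumb as shelling out and checking the output
programmatically, before the command ever goes near a cap task (this
assumes a local database “foo” with at least one table):

  require "open3"

  describe "the mysqldump command line" do
    it "emits SQL we recognize" do
      # assumes a local database "foo" with at least one table
      out, status = Open3.capture2e("mysqldump --no-data -u root foo")
      status.success?.should == true
      out.should include("CREATE TABLE")
    end
  end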

But the bigger question to me is, unless you’re developing
Capistrano or writing fundamental new tasks for it, why unit test it?
To me that’s a bit like taking apart my car to test the pieces; I find
it more sensible to assume (correctly or not) that they’ve already
been tested at the factory. Integration testing seems a lot more
appropriate for a typical deployment, even a complex one. I don’t
really care what shell commands get run; I care that the end result
puts the expected stuff in the expected locations for the inputs I
intend to give it, and executes the expected commands.

Either way, though, I’d think it would be pretty easy to test in RSpec
or Cucumber. You could even use Webrat to help. Set up a staging
URL, write a stupid little Rails or Sinatra app with a single
controller that spits out different known values before and after
deployment, run your cap recipe inside your spec, and hit the URL to
see if it changed. Add wrinkles as necessary for whatever special
thing you’re testing that’s important to your situation. (If you care
about file contents or locations, your Rails controller could show
grep results or checksums. If you care about DB migrations, write
simple migrations to change DB values and return those. Etc. The
micro-app doesn’t have to be elegant or proper. It’s not the thing
being tested.)
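
A minimal version of that spec might read (the staging URL, the
/version endpoint, and the stage name are all stand-ins):

  require "net/http"

  describe "cap deploy to staging" do
    it "changes what the app reports after a deploy" do
      url = URI("http://staging.example.com/version")   # stand-in URL
      before_deploy = Net::HTTP.get(url)
      system("cap staging deploy").should == true
      Net::HTTP.get(url).should_not == before_deploy
    end
  end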

Does that make sense? I haven’t bothered with this sort of thing much
because my deployments are never the weird part of my applications,
but it seems straightforward. One thing we tend to forget if we’re
too MVC-Web-focused is that RSpec can test anything you can express
in Ruby. And you can use Ruby to do anything you can do in the OS.
So it’s a lot more flexible than just hitting models and controllers.


Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

On 27 Jan 2009, at 17:08, Brian T. wrote:

My experience with unit testing Capistrano has been less than positive.

IMO, the best way to test Capistrano is to have a staging environment
that simulates your production environment, where you deploy and make
sure things work.
I can’t recall a situation where I had non-obvious issues that would
have been prevented with unit tests. Often, non-obvious issues are
related to properly restarting processes, because the monit + shell
script interactions had issues. This is testable, of course.

That’s why I suggested acceptance tests in Cucumber. I don’t think
mocking / unit testing is going to get you much value here - what you
need is something that feeds back whether the whole thing works. So
yeah you’ll need a sandbox / staging environment for that.

Matt W.
http://blog.mattwynne.net

We actually have a machine that is a perfect clone of the production
machine. The only difference is the passwords. We test all deployments
to it first. We call it staging. Having a staging environment has two
benefits:

  1. We can test our deployment scripts, migrations, etc. on something
     as close as we can get to production.
  2. If the production box dies, we have one that can take its place
     very quickly (change the database passwords/pointers and go).

We also have a demo box that is updated via Capistrano whenever the
build passes.

Testing configuration / deployment is hard because you can assert that
the config is what you think it is, but that in no way proves that it’s
actually working. It’s like using mocks to build up functionality
against a mock library. At some point you actually have to test against
the real thing or you’re just guessing.

-Mike

On Jan 28, 2009, at 10:01 AM, Mike G. wrote:

Testing configuration / deployment is hard because you can assert
that the config is what you think it is, but that in no way proves
that it’s actually working. It’s like using mocks to build up
functionality against a mock library. At some point you actually
have to test against the real thing or you’re just guessing.

Well, it’s actually even trickier than that, because I’m not only
deploying one app to one machine, but am writing a library, which
could be used for any number of apps on any number of machines (with
certain assumptions - i.e. a Unix machine with MySQL, etc.).

I’ve tested it locally (and by that I mean on a staging slice), but
when I fix a bug in the library without writing a test, there is
absolutely no regression coverage going forward.

Looks like establishing the test harness will be the trickiest part.
As usual, writing the first test is the hardest.

Scott

On Jan 27, 2009, at 12:08 PM, Brian T. wrote:

My experience with unit testing Capistrano has been less than positive.

You can then ensure that the actual db dump exists and properly
restores.

I’m sorry about the rambling infoflood/rant. I hope it is useful. :)

It is useful - thanks, Brian.

Unfortunately the bug involves the listing that comes out of an “ls”
command, and a subsequent “rm -rf” of an old backup file, so it looks
like I’ll have to adopt some sort of local deploy strategy.
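
Though I may be able to spec the fragile part - deciding which backups
to delete from the “ls” listing - by pulling it out into a plain method
first. Something like this, with all the names invented:

  # Isolate the listing-parsing logic so it can be specced
  # without touching a server.
  module BackupPruner
    # Given raw `ls -1` output, return the files to delete, keeping
    # the newest `keep` backups (names assumed to sort chronologically).
    def self.stale_backups(ls_output, keep = 5)
      ls_output.split("\n").sort[0...-keep]
    end
  end

  describe BackupPruner do
    it "keeps the newest backups and marks the rest stale" do
      listing = (1..7).map { |i| "backup-2009-01-0#{i}.sql.gz" }.join("\n")
      BackupPruner.stale_backups(listing).should ==
        ["backup-2009-01-01.sql.gz", "backup-2009-01-02.sql.gz"]
    end
  end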

Thanks for the info,

Scott