I’m getting into RSpec more after getting the awesome Peepcode
screencast about it. So far I love it, but coming from Test::Unit with
Rails, I have some concerns about the recommendation to mock models and
database calls in controller and view specs.
Let’s say you have a method on your User model called
User.find_activated, and you have model specs for it. Then in your
controller specs you mock it up like so:
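Something like this (UsersController, its index action, and the rest of the spec are illustrative):

describe UsersController, "GET index" do
  before(:each) do
    # Stub the custom finder so the spec never touches the database.
    @users = [mock_model(User)]
    User.stub!(:find_activated).and_return(@users)
  end

  it "assigns the activated users" do
    get :index
    assigns[:users].should == @users
  end
end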
Everything works great, and specs execute fast due to limited DB access.
What if I then refactor, and decide to rename the find_activated method
to a more descriptive User.find_all_activated? So I change the model
spec to reflect the new name, then change the method name in the model,
and all specs pass. Awesome.
So I hit my app and it crashes. I get “no method find_activated for
User”. My controller specs are mocking a method that no longer exists
in the real app. As a result, my specs pass, but my app crashes.
This would not have happened without the mocks, as the spec would have
failed since it called a non-existent method.
So, while I get the benefit of mocking (speed and isolation), it seems
like a gateway for passing specs and failing production applications.
> What if I then refactor, and decide to rename the find_activated
> method to a more descriptive User.find_all_activated? So I change
> the model spec to reflect the new name, then change the method name
> in the model, and all specs pass. Awesome.
>
> So I hit my app and it crashes. I get “no method find_activated
> for User”. My controller specs are mocking a method that no longer
> exists in the real app. As a result, my specs pass, but my app
> crashes.
I’ve been looking at what it would take to establish a semi-automatic
mapping between specs and mocks. E.g. you could flag specs as
presenting a mock so that a change in the lower level object (and
attendant spec) would change the upstream mock behavior. In theory
this would help the sort of refactoring problems you’re describing,
although there would still be sharp edges.
Earlier versions of Mocha used not to allow you to stub a non-existent
method. I’m intending to re-introduce this with a manual override.
Also there is an experimental feature in Mocha called responds_like or
quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
which constrains a mock to only allow methods that exist on a
specified object or class.
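For example (adapted from the Mocha docs; the Sheep class is illustrative):

class Sheep
  def chew(grass); end
end

sheep = mock('sheep')
sheep.responds_like(Sheep.new)  # constrain the mock to Sheep's public methods
sheep.expects(:chew)
sheep.expects(:foo)
sheep.respond_to?(:chew)  # => true
sheep.respond_to?(:foo)   # => false
sheep.chew                # fine: Sheep#chew exists
sheep.foo                 # raises NoMethodError, since Sheep has no #foo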
However, in the end there’s no substitute for acceptance tests that
exercise critical business functionality.
> Earlier versions of Mocha used not to allow you to stub a non-existent
> method. I’m intending to re-introduce this with a manual override.
May I recommend that the default behaviour be to ignore these, but
that you can explicitly tell Mocha to fail on, or warn you about,
expectations for methods that don’t exist?
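Something along these lines, say (the configuration API here is hypothetical, just to sketch the idea):

# Hypothetical per-suite switch controlling how Mocha treats
# expectations on methods the real object does not define.
Mocha::Configuration.allow(:stubbing_non_existent_method)     # ignore (default)
Mocha::Configuration.warn_when(:stubbing_non_existent_method) # print a warning
Mocha::Configuration.prevent(:stubbing_non_existent_method)   # fail the test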
> Also there is an experimental feature in Mocha called responds_like or
> quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
> which constrains a mock to only allow methods that exist on a
> specified object or class.
>
> However, in the end there’s no substitute for acceptance tests that
> exercise critical business functionality.
Hear! Hear!
It’s important to understand that mocking as we approach it today
comes from TDD as part of an XP process, which divides tests into
Customer Tests and Programmer Tests (note: the lingo has morphed over
time, but the distinction has not). The idea is that you begin with
customer defined end-to-end tests (that fail miserably at first) and
use those to steer you in the right direction in terms of what objects
to develop. Then you drive the development of those objects with more
focused tests.
In this environment, mocking allows us to keep the programmer tests
focused on small bits of functionality. This makes it much easier to:

- develop code when the other pieces it relies on don’t exist yet
- understand failures
- run the tests fast
The cost is the scenario Alex described: programmer tests all pass,
but a customer test fails. This is a GOOD THING. This is why we have
different levels of testing. If every level of testing exercises
everything in its entire environment, then we really only have one
level of testing, and we lose the unique benefits we intend to reap
from having different levels of testing.
Conversely, if you are not doing any high level testing in addition to
the object-level testing, then you probably shouldn’t be using mocks
at all.
> However, in the end there’s no substitute for acceptance tests that
> exercise critical business functionality.
I’ve only recently started using RSpec and I’m still trying to get my
head around the entire test suite that needs to be set up.

I think I have a handle on testing models, controllers, and views
separately, thanks to looking at the generated code for examples and
plenty of pestering the rspec list. I also have used mocks and stubs
for most of it.

Are there any tutorial-style posts or anything similar that someone
could point me towards for the above “acceptance tests”? I’m assuming
these are similar to integration tests in standard Rails testing,
where everything plays together to prove it out.

I guess I’m a bit concerned that if I try to just combine everything
at the current level I’m at, I’ll balls it up badly, so I’m looking
for pointers.

Cheers
Daniel