RSpec - concerns about mocking

I’m getting into RSpec more after getting the awesome PeepCode
screencast about it. So far I love it, but coming from Test::Unit with
Rails, I have some concerns about the recommendation to mock models and
database calls in controller and view specs.

Let’s say you have a method on your User model called
User.find_activated, and you have model specs for it. Then in your
controller specs you mock it out like so:

@user = mock_model(User)
User.stub!(:find_activated).and_return(@user)

Everything works great, and specs execute fast due to limited DB access.
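
In context, the whole controller spec looks something like this (a
sketch; the controller name and action are just examples):

describe UsersController, "GET index" do
  before(:each) do
    @user = mock_model(User)
    # No real database rows are touched; the stub supplies the data.
    User.stub!(:find_activated).and_return(@user)
  end

  it "assigns the activated user for the view" do
    get :index
    assigns[:user].should == @user
  end
end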

What if I then refactor, and decide to rename the find_activated method
to the more descriptive User.find_all_activated? So I change the model
spec to reflect the new name, then change the method name in the model,
and all specs pass. Awesome.

So I hit my app and it crashes. I get “no method find_activated for
User”. My controller specs are mocking a method that no longer exists
in the real app. As a result, my specs pass, but my app crashes.

This would not have happened without the mocks, as the spec would have
failed since it called a non-existent method.
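
Concretely, after the rename the pieces look like this (a sketch; the
activated_at condition is just an example):

# user.rb -- renamed here and in the model spec:
class User < ActiveRecord::Base
  def self.find_all_activated
    find(:all, :conditions => "activated_at IS NOT NULL")
  end
end

# users_controller_spec.rb -- still stubbing the old name, so it passes:
User.stub!(:find_activated).and_return(@user)

# users_controller.rb -- still calling the old name, so production
# raises the NoMethodError that the stub hid from the specs:
@user = User.find_activated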

So, while I get the benefits of mocking (speed and isolation), it seems
like a gateway to specs that pass while the production application fails.

Am I wrong? Am I going about mocking wrong?

On Jul 16, 2007, at 1:22 PM, Alex W. wrote:

What if I then refactor, and decide to rename the find_activated
method to the more descriptive User.find_all_activated? So I change
the model spec to reflect the new name, then change the method name
in the model, and all specs pass. Awesome.

So I hit my app and it crashes. I get “no method find_activated
for User”. My controller specs are mocking a method that no longer
exists in the real app. As a result, my specs pass, but my app
crashes.

I’ve been looking at what it would take to establish a semi-automatic
mapping between specs and mocks. E.g. you could flag specs as
presenting a mock so that a change in the lower level object (and
attendant spec) would change the upstream mock behavior. In theory
this would help with the sort of refactoring problems you’re describing,
although there would still be sharp edges.

-faisal

Earlier versions of Mocha did not allow you to stub a non-existent
method. I’m intending to re-introduce this with a manual override.

Also there is an experimental feature in Mocha called responds_like or
quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
which constrains a mock to only allow methods that exist on a
specified object or class.
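
For example (a sketch based on those docs; quacks_like is the alias,
and the constraint is checked when a method is invoked on the mock):

users = mock('User')
users.responds_like(User)              # respond_to? is checked against User
users.stubs(:find_all_activated).returns([])
users.find_all_activated               # fine: User responds to it
users.stubs(:find_activated).returns([])
users.find_activated                   # NoMethodError: User no longer does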

However, in the end there’s no substitute for acceptance tests that
exercise critical business functionality.


James.

On 7/16/07, James M. [email protected] wrote:

Earlier versions of Mocha used not to allow you to stub a non-existent
method. I’m intending to re-introduce this with a manual override.

May I recommend that the default behaviour is that these are ignored,
but that you can explicitly tell mocha to fail or warn you of
expectations for methods that don’t exist?

Also there is an experimental feature in Mocha called responds_like or
quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
which constrains a mock to only allow methods that exist on a
specified object or class.

However, in the end there’s no substitute for acceptance tests that
exercise critical business functionality.

Hear! Hear!

It’s important to understand that mocking as we approach it today
comes from TDD as part of an XP process, which divides tests into
Customer Tests and Programmer Tests (note: the lingo has morphed over
time, but the distinction has not). The idea is that you begin with
customer defined end-to-end tests (that fail miserably at first) and
use those to steer you in the right direction in terms of what objects
to develop. Then you drive the development of those objects with more
focused tests.

In this environment, mocking allows us to keep the programmer tests
focused on small bits of functionality. This makes it much easier to:

  • develop code when the other pieces it relies on don’t exist yet (see the sketch after this list)
  • understand failures
  • run the tests fast
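
To illustrate the first point, here’s a minimal sketch (the Order and
payment gateway names are invented for the example); the spec drives
out Order before any real gateway class exists:

describe Order, "#purchase" do
  it "charges the gateway for the order total" do
    # The collaborator doesn't exist yet -- a mock stands in for it.
    gateway = mock("payment gateway")
    gateway.should_receive(:charge).with(100).and_return(true)
    Order.new(gateway, 100).purchase.should == true
  end
end

# The minimal Order that the spec drives out:
class Order
  def initialize(gateway, total)
    @gateway, @total = gateway, total
  end

  def purchase
    @gateway.charge(@total)
  end
end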

The cost is the scenario Alex described: programmer tests all pass,
but a customer test fails. This is a GOOD THING. This is why we have
different levels of testing. If every level of testing exercises
everything in its entire environment, then we really only have one
level of testing, and we lose the unique benefits we intend to reap
from having different levels of testing.

Conversely, if you are not doing any high level testing in addition to
the object-level testing, then you probably shouldn’t be using mocks
at all.

Cheers,
David

On 7/17/07, James M. [email protected] wrote:

However, in the end there’s no substitute for acceptance tests that
exercise critical business functionality.

I’ve only recently started using RSpec and I’m still trying to get my
head around the entire test suite that needs to be set up.

I think I have a handle on testing models, controllers, and views
separately, thanks to looking at the generated code for examples and
plenty of pestering on the RSpec list. I’ve also used mocks and stubs
for most of it.

Are there any tutorial-style posts or anything similar that someone
could point me towards for the above “acceptance tests”? I’m assuming
these are similar to integration tests in standard Rails testing, where
everything plays together to prove it out.
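
For example, is it something along these lines (names guessed), using
Rails’ built-in integration test support?

class UserStoriesTest < ActionController::IntegrationTest
  fixtures :users

  # Drives the full stack with no mocks, so a renamed model method
  # fails here even while mocked controller specs keep passing.
  def test_browsing_activated_users
    get "/users"
    assert_response :success
  end
end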

I guess I’m a bit concerned that if I try to just combine everything at
the level I’m currently at, I’ll balls it up badly, so I’m looking for
pointers.

Cheers
Daniel