Mocks? Really?

OK, so I've played a bit with mocks and mock_models in controller and view tests, and I have a question. Is this statement really correct:

“We highly recommend that you exploit the mock framework here rather
than providing real model objects in order to keep the view specs
isolated from changes to your models.”
(http://rspec.rubyforge.org/documentation/rails/writing/views.html in
section ‘assigns’)

I ask because this wonderful declaration passes in my view test even
though the model Project doesn’t have the field ‘synopsis’ yet:

@project1 = mock_model(Project)
@project1.stub!(:id).and_return(1)
@project1.stub!(:name).and_return("My first project")
@project1.stub!(:synopsis).and_return("This is a fantastic new project")
assigns[:projects] = [@project1]

it "should show the list of projects with name and synopsis" do
  render "/projects/index.rhtml"
  response.should have_tag('div#project_1_name', 'My first project')
  response.should have_tag('div#project_1_synopsis', 'This is a fantastic new project')
  response.should have_tag('div#project_2_name', 'My second project')
  response.should have_tag('div#project_2_synopsis', 'This is another fantastic project')
  response.should have_tag('a', 'This is another fantastic project')
end

This is handy and keeps the view test isolated from changes to your
models, but is that really the point? What if someone later changes
the model and updates the model tests so that they pass but do not
realize that they’ve then broken the view?

I'm sure I am simply missing the point here, or not taking into account the integration testing that I expect would aim to catch such changes, but somehow I want my view tests to tell me that the views are no longer going to behave as expected.

Thanks

Andy

On Dec 6, 2007 10:56 AM, Andy G. [email protected] wrote:


This is handy and keeps the view test isolated from changes to your
models, but is that really the point?

It depends on what you value. If you are doing BDD, then you are
running all of your examples between every change. If you are doing
that, you value fast-running examples.

What if someone later changes
the model and updates the model tests so that they pass but do not
realize that they’ve then broken the view?

I'm sure I am simply missing the point here, or not taking into account the integration testing that I expect would aim to catch such changes, but somehow I want my view tests to tell me that the views are no longer going to behave as expected.

This is a matter of mindset. In my view, the view is still behaving as expected; it's just that the expectation is wrong. What's not behaving correctly is the application, which, as you point out, we would learn from stories (integration tests).

HTH,
David

Hi!

This is handy and keeps the view test isolated from changes to your
models, but is that really the point?

I was very confused at first as well. It didn't make much sense to me and I'm not using it at all. As far as I can tell, it's an optional tool for digging into views when needed. I will use it when some piece of a view becomes important enough to verify directly. However, as a one-man project I haven't felt that need yet.

The second thing is about how you like to develop your stuff. As far as I know, David starts with Story -> views -> controller -> model. I prefer to go this way: Story -> model/controller -> views. So now you might guess why speccing views is a nice thing when you go David's way, top-down.

Anyhow, mocking in controllers (and in views) makes much more sense now with the story runner in the big picture. The general 'does it work at all' stuff goes to the story runner and the specific low-level stuff goes to the specs. So it's up to you whether you care about low-level stuff in views.

One thing I still don't like so much is that RSpec "forces" you to develop things very vertically, or otherwise your mocks will get out of sync very quickly. Correct me if I'm wrong!

Oki,
Priit

PS. Somehow autotest does not pick up stories to run. I haven't yet investigated why.

On 6 Dec 2007, at 16:56, Andy G. wrote:

This is handy and keeps the view test isolated from changes to your
models, but is that really the point?

Yes, that’s part of the whole idea of using mocks. (Similarly, two
interacting models will be isolated from each other’s implementation.)

What if someone later changes
the model and updates the model tests so that they pass but do not
realize that they’ve then broken the view?

This is an engineering problem. For example, perhaps you could have
one helper that’s used to manufacture mock Projects, and use it in the
view specs and the model specs; changes to the attributes of Project
(and the corresponding changes to its spec) will then require that
helper to be updated so that the model specs still pass, and the view
spec will fail accordingly.
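To make that concrete, here is a minimal sketch of such a shared helper (the module name, method names, and attribute list are my own illustration, not something from this thread), which both the model specs and the view specs could include:

module ProjectSpecHelper
  # The one agreed-upon definition of a Project for spec purposes.
  PROJECT_ATTRIBUTES = {
    :name => "My first project",
    :synopsis => "This is a fantastic new project"
  }

  # For view and controller specs: a mock with the agreed attributes stubbed.
  def mock_project(overrides = {})
    mock_model(Project, PROJECT_ATTRIBUTES.merge(overrides))
  end

  # For model specs: a real, saved Project built from the same attributes.
  def real_project(overrides = {})
    Project.create!(PROJECT_ATTRIBUTES.merge(overrides))
  end
end

If :synopsis were renamed or removed on the model, real_project (and with it the model specs) would fail until PROJECT_ATTRIBUTES was updated, and because the view specs build their mocks from the same constant, they would be pulled along with the change.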

somehow i want my view tests to tell me that the views are no longer
going to behave as expected

Your view specs tell you that your views will behave as expected
under the assumption that your model objects behave as expected.
That assumption needs to be checked elsewhere (in the model spec).

Cheers,
-Tom

Thanks for all the feedback. Personally, I am working outside in, from views to models, so mocking does have its place. After lots of trialing, I am confident now that a Factory class can satisfy my need for using mocks and real models in different places. I define the characteristics of an intended model in the factory and ask it to return either a mock_model or a real one depending on my specific need. Once I've used it in anger, I'll mail details of my implementation and experiences.

Although I have played with story runner, I have yet to decide how
I’ll fit that into my development process. In fact, I love story
runner, it’s just I am not sure how much time I can afford to assign
to tests on client work whilst I am still getting up to speed.

As a note, I recently wrote a functional spec document for a client using the Given, When, Then approach for each use case, and the client loved it! It is a very clear way of writing specs.

Andy

On Dec 7, 2007 8:30 PM, Priit T. [email protected] wrote:

One thing I still don't like so much is that RSpec "forces" you to develop things very vertically, or otherwise your mocks will get out of sync very quickly. Correct me if I'm wrong!

RSpec doesn't force you to do anything at all. However, the agile approach tends to be vertical slices in short iterations. Working outside-in, using mocks, etc. all ties in with that thinking.

But RSpec is certainly not going to throw errors at you if you decide to write your entire model layer first.


PS. Somehow autotest does not pick up stories to run. I haven't yet investigated why.

This is by design. Autotest supports the TDD process - rapid
iterations of red/green/refactor. Having them run your stories too
would slow things down considerably.

I prefer the mantra “mock roles, not objects”, in other words, mock
things
that have behaviour (services, components, resources, whatever your
preferred term is) rather than stubbing out domain objects themselves.
If
you have to mock domain objects it’s usually a smell that your domain
implementation is too tightly coupled to some infrastructure.

The rails community is the first place I’ve encountered stubbing domain
objects as a norm (and in fact as an encouraged “best practice”). It
seems
to be a consequence of how tightly coupled the model classes are to the
database. I don’t use rails in anger, and in the other technologies and
frameworks I use (in Java, .Net or Ruby) I never mock the domain model.
It
seems unwieldy and overly verbose to me to have to stub properties on a
model class.

I usually use a builder pattern:

cheese = CheeseBuilder.to_cheese # with suitable default values for testing

The builder has lots of methods that start "with_" and return the builder instance, so you can train-wreck the properties:

another_cheese = CheeseBuilder.with_type(:edam).with_flavour(:mild).to_cheese
toastie = ToastieBuilder.with_cheese(cheese).to_toastie # composing domain objects
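For anyone who hasn't used the pattern, a minimal sketch of such a builder in Ruby might look like the following (the Cheese class and its attributes are invented purely for illustration; Dan's real builders will differ):

# Hypothetical domain object, used only for this illustration.
class Cheese
  attr_reader :type, :flavour
  def initialize(type, flavour)
    @type, @flavour = type, flavour
  end
end

class CheeseBuilder
  def initialize
    @type    = :cheddar # suitable defaults for testing
    @flavour = :strong
  end

  # Class-level entry points so you can write CheeseBuilder.with_type(...).
  def self.to_cheese;       new.to_cheese;       end
  def self.with_type(t);    new.with_type(t);    end
  def self.with_flavour(f); new.with_flavour(f); end

  # Each with_* method returns the builder itself, so calls chain.
  def with_type(type)
    @type = type
    self
  end

  def with_flavour(flavour)
    @flavour = flavour
    self
  end

  def to_cheese
    Cheese.new(@type, @flavour)
  end
end

cheese         = CheeseBuilder.to_cheese
another_cheese = CheeseBuilder.with_type(:edam).with_flavour(:mild).to_cheese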

In applications with a database, I then have a very thin suite of low
level
integration tests that use “real” domain objects, wired up to a real
database to verify the behaviour at that level. Of course this is both
slow
and highly dependent on infrastructure, so I am careful to keep the
integration examples separate from the interaction-based ones that I can
isolate.

Maybe it’s the way rails encourages you to write apps - where it’s
mostly
about getting data from a screen to a database and back again - that
people
are more tolerant of such highly-coupled tests. For myself, I use
builders
for domain objects and mocks for service dependencies whenever I can,
and
have a minimal suite of integration tests that require everything to be
wired together. Using fixtures and database setup for regular
behavioural
examples smacks of data-oriented programming to me, and stubbing domain
objects feels like solving the wrong problem.

Cheers,
Dan

Pat,

I’m going to reply by promising to reply. You’ve asked a ton of really
useful and insightful questions. I can’t do them justice without sitting
down and spending a bunch of time thinking about them.

I’m going to be off the radar for a bit over Christmas - I’ve had an
insane
year and I’ve promised myself (and my wife) some quiet time. Your
questions
have a little star next to them in my gmail inbox, which means at the
very
least they’ll be ignored less than the other mail I have to respond to
:)

The one sentence response, though, is that I honestly don’t know (which
is
why I need to think about it). I can tell you I think I isolate
services
from their dependencies using mocks, I think I never stub domain
objects
(I definitely never mock them, but stubbing them is different), and I can't say how I test layers because I think we have a different definition of layers.

The reason I'm being so vague is that I usually specify behaviour
from
the outside in, starting with the “outermost” objects (the ones that
appear
in the scenario steps) and working inwards as I implement each bit of
behaviour. That way I discover service dependencies that I introduce as
mocks, and other domain objects that become, well, domain objects. Then
there are other little classes that fall out of the mix that seem to
make
sense as I go along. I don’t usually start out with much more of a
strategy
than that. I can’t speak as a tester because I’m not one, so I can’t
really
give you a sensible answer for how isolated my tests are. I simply don’t
have tests at that level. At an acceptance level my scenarios only ever
use
real objects wired together doing full end-to-end testing. Sometimes
I’ll
swap in a lighter-weight implementation (say using an in-memory database
rather than a remote one, or an in-thread Java web container like Jetty
rather than firing up Tomcat), but all the wiring is still the same (say
JDBC or HTTP-over-the-wire). I’m still not entirely sure how this maps
to
Rails, but in Java MVC web apps I would want the controller examples
failing if the model’s behaviour changed in a particular way, so I can’t
think of a reason why I would want fake domain objects.

Like I said, I’ll have a proper think and get back to you.

Cheers,
Dan

On Dec 8, 2007 4:06 AM, Dan N. [email protected] wrote:

I prefer the mantra “mock roles, not objects”, in other words, mock things
that have behaviour (services, components, resources, whatever your
preferred term is) rather than stubbing out domain objects themselves. If
you have to mock domain objects it’s usually a smell that your domain
implementation is too tightly coupled to some infrastructure.

Assuming you could easily write Rails specs using the real domain
objects, but not hit the database, would you “never” mock domain
objects (where “never” means you deviate only in extraordinary
circumstances)? I'm mostly curious about the interaction between
controller and model…if you use real models, then changes to the
model code could very well lead to failing controller specs, even
though the controller’s logic is still correct.

What is your opinion on isolating tests? Do you try to test each
class in complete isolation, mocking its collaborators? When you use
interaction-based tests, do you always leave those in place, or do you
substitute real objects as you implement them (and if so, do your
changes move to a more state-based style)? How do you approach
specifying different layers within an app? Do you use real
implementations if there are lightweight ones available, or do you
mock everything out?

I realize that’s a ton of questions…I’d be very grateful (and
impressed!) if you took the time to answer any of them. Also I’d love
to hear input from other people as well.

Pat

Francis and Pat probably know my thoughts on this, already, but as
far as I can see it, mocks (at least the message based ones) are
popular for one reason in the rails / active-record world:

Speed. Mocks are extremely fast. I don’t think it’s uncommon for
those who write specs for rails projects to have a full test suite
running in under 20 seconds if they are mocking all dependencies.
Primarily, this means using mocks for all associations on a Rails
model, and using only mocks for controller specs.
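As a purely illustrative sketch of that style (assuming a ProjectsController whose index action simply assigns Project.find(:all)), a controller spec that never touches the database might look like:

describe ProjectsController do
  before(:each) do
    # No database access: the model class and the instance it returns are faked.
    @project = mock_model(Project, :name => "Mock Project")
    Project.stub!(:find).and_return([@project])
  end

  it "should assign the found projects for the index view" do
    get :index
    assigns[:projects].should == [@project]
  end
end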

The issue of speed seems secondary, but I can already tell how costly
a long build cycle is. At work our test suite has about 1800 specs,
and takes around 3 minutes to run (and hits the database in all model
specs). My coworker actually released something into production on
Friday before he left which failed the build. Obviously - this has a
serious effect on the end user, who is currently receiving errors.
If the test suite took 20 seconds to run, he would be running it all
the time, and this error would have never occurred. The fact that
they don’t run quickly means that he isn’t going to run them all the
time, and will need to rely on a CI server to beep or do something
obnoxious like that (which isn’t an option in the shared office space
in which we are currently working).

Plus - let’s be honest: We use tests as our feedback loop. The
tighter we can get this, the closer we can stay to the code. When the
specs take over a few minutes to run, we no longer have the luxury of
running the whole suite every time we make a little one line change.
We are forced to run specs from one file, or a subset of the specs
from one file, and we lose a certain amount of feedback on how our
code is integrating.

Yes - there are other reasons for using mocks - like defining interfaces that don't exist (or may not exist for a long time); so far, though, the main reason I've seen them used is for speed. I think this is a major problem with ActiveRecord - and one which can only be solved by moving to an ORM with a different design pattern (like a true DataMapper). Lafcadio would probably be in the running for this sort of thing - a library which can isolate the database from the API with a middle layer, which can easily be mocked either by a mock library that could be a drop-in replacement for the database, or a middle layer which could easily be stubbed out with message-based stubs.
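To illustrate the kind of middle layer being described (all of the class names below are hypothetical and not taken from Lafcadio), the domain object can talk to a mapper, and specs replace the mapper with a message-based stub:

# The 'middle layer': in production it would talk to the database.
class ProjectMapper
  def find_all
    # real implementation would run a query here
  end
end

# A domain-level object that depends on the mapper, not on the database.
class ProjectCatalogue
  def initialize(mapper = ProjectMapper.new)
    @mapper = mapper
  end

  def projects
    @mapper.find_all
  end
end

describe ProjectCatalogue do
  it "should return whatever the mapper finds" do
    mapper = mock("mapper")
    mapper.stub!(:find_all).and_return([:a_project])
    ProjectCatalogue.new(mapper).projects.should == [:a_project]
  end
end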

As always, it’s good to hear this sort of discussion going on.
(Francis: It was exactly this sort of discussion that got me
involved with RSpec in the first place)

Regards,

Scott

On Dec 16, 2007 7:43 PM, Scott T. [email protected]
wrote:

Francis and Pat probably know my thoughts on this, already, but as
far as I can see it, mocks (at least the message based ones) are
popular for one reason in the rails / active-record world:

Speed. Mocks are extremely fast. I don’t think it’s uncommon for
those who write specs for rails projects to have a full test suite
running in under 20 seconds if they are mocking all dependencies.
Primarily, this means using mocks for all associations on a Rails
model, and using only mocks for controller specs.
My experience with AR is that AR itself (mainly object instantiation) is slow, not the queries. Mocking the queries did not result in worthwhile test run time savings. Rails creates lots of objects, which causes lots of slowness. It's death by a thousand cuts.

I guess one could mock out the entire AR object, but I’m not convinced
that it would result in large performance benefits in many cases.
I’ve tried doing this a couple of times and did not save much time at
all. Of course, this was done in view examples on a project that uses
Markaby (which is slow).

Whatever you do, I recommend taking performance metrics of your suite
as you try to diagnose the slowness. The results will probably be
surprising.
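One cheap way to get such metrics - just a sketch, using only the standard library and the Project model from earlier in the thread - is to time the raw query against full ActiveRecord instantiation:

require 'benchmark'

Benchmark.bm(18) do |bm|
  # Same query twice: once returning raw rows, once building AR objects.
  bm.report("raw SQL query")    { Project.connection.select_all("SELECT * FROM projects") }
  bm.report("AR instantiation") { Project.find(:all) }
end

If object instantiation really is the bottleneck, the second report will dominate even though both blocks run essentially the same query.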


Yes - there are other reasons for using mocks - like defining interfaces that don't exist (or may not exist for a long time); so far, though, the main reason I've seen them used is for speed. I think this is a major problem with ActiveRecord - and one which can only be solved by moving to an ORM with a different design pattern (like a true DataMapper). Lafcadio would probably be in the running for this sort of thing - a library which can isolate the database from the API with a middle layer, which can easily be mocked either by a mock library that could be a drop-in replacement for the database, or a middle layer which could easily be stubbed out with message-based stubs.
One thing about AR that hurts is all of the copies of the same record.
It would be really nice if there were only one instance of each record in the thread.
This would help performance and significantly reduce the need to
reload the object.

Coming to this thread a bit late:

I think I’m pretty close to Dan, in practice: I’m not a big fan of
fine-grained isolation in writing your tests. The practice seems to
me like it would just bog you down. When I'm writing a behavior for a
particular thing, such as a controller, I don’t want to have to worry
about the precise messages that are passed to its collaborators. I
try to think in a fairly “black box” manner about it: Presupposing
that there’s a given document in a database table, when I make an
HTTP request that’s looking for that document, I should get that
document in such-and-such a format. Ideally I wouldn’t specify too
much whether the controller hits Document.find or
Document.find_by_sql or gets it out of some disk cache or gets the
data by doing a magical little dance in a faerie circle off in the
forest. It’s really not my test’s problem.

On the other hand, I do think mocking is extremely useful when you’re
dealing with very externalized services with narrow, rigid interfaces
that you can’t implicitly test all the time. At work I have to write
lots of complex tests around a specific web service, but I don’t have
a lot of control over it, so I wrote a fairly complex mock for that
service. But even then it’s a different sort of mock: It’s more state-
aware than surface-aware, which is part of the point as I see it. Of
course, writing those sorts of mocks is much more time-consuming.

If you haven't seen it before, Martin F. has a pretty good article about the differences in styles: http://martinfowler.com/articles/mocksArentStubs.html

Francis H.

For me, this has certainly been the most enjoyable and interesting
part of using RSpec - finding answers to these questions in a context
that suits the project. Of course, I am new to this, but have found an
approach that works well for my current project. However, my approach
is wide open to review and improvement and will no doubt evolve well
beyond its current scope in future.

I am still reading and re-reading Dan's previous mail regarding the Builder pattern as it is very elegant, although I am not using it now as I feel that it could introduce a little too much overhead to maintain the Builders. I am also considering Dan's mantra, and with more RSpec experience I'll gain a better insight into how this can work in Rails.

Here's what I'm doing that works well for our project:

Mocking

  • In views and controllers I always use mocks and stub out any
    responses that I spot or that are flagged up by autotest (yep, autotest is
    ace for highlighting those methods that need stubbing)
  • In models, I only use a real model for the model specific to the
    test. I always mock all other interacting models - so in a test for a
    project with many tasks, the tasks model is mocked and then stubbed.
  • I only define the expectation for any mock or real model in one
    place. So, in our app, the expected definition of a Project is defined
    once and that definition is used by all tests that use that object,
    from views to models. Its default values can be overwritten, but the
    expectation is set for all uses. More info below:

Factories
So far, I am finding that a factory class offers useful glue between the intentionally separated unit tests. So, even though all tests are isolated from each other with mocks, they still share an expectation of what any used mock should look like. This enables me to be aware of the system-wide impact of a change to a small component of the system. I fully accept that this should be covered by integration testing and not unit testing, but on a quick project, I am not sure I can justify (not yet at least) the time to write unit tests and then integration tests, especially as the test team will go at the app with Selenium. As I say, mine is an evolving platform :)

Here's how we are using a factory. I hope it helps and that I don't get too grilled for the design and implementation :)

# The factory class houses the expected definition of each object
# and returns mocks or real models depending on the request.
# Its attribute values (but not keys) can be overwritten.
# See the 'validate_attributes' method below.

module Factory
  def self.create_project(attributes = {}, mock = false)
    @default_attributes = {
      :name => "Mock Project",
      :synopsis => "Mock Project Synopsis"
    }
    create_object(attributes, mock, Project)
  end

  private

  def self.create_object(custom_attributes, mock, object_type)
    validate_attributes(custom_attributes)
    attributes = @default_attributes.merge(custom_attributes)
    if mock
      attributes.each_pair do |key, value|
        mock.stub!(key).and_return(value)
      end
      mock
    else
      mock = object_type.create attributes
    end
  end

  # The following method validates that any received custom attribute's key
  # is in the expected attribute list for the object. If not, the test fails,
  # forcing the developer to keep the factory defaults up to date with any
  # changes.

  def self.validate_attributes(attributes)
    attributes.each_key do |a|
      raise "Unrecognised attribute '#{a}' was passed into the Factory" unless @default_attributes.has_key?(a)
    end
    true
  end
end

# The Projects controller test interacts with the Factory and receives mocks.

describe ProjectsController do
  include Factory
  before(:each) do
    @project1 = Factory.create_project({}, mock_model(Project))
    @project2 = Factory.create_project({:name => "My second project",
      :synopsis => "This is another fantastic project"}, mock_model(Project))
    @projects = [@project1, @project2]
  end
end

# The Project model test interacts with the Factory and receives real models.

describe Project do
  include Factory

  before(:each) do
    Project.destroy_all
    # Real project
    @project = Factory.create_project
    # Stub Role
    @role = Factory.create_role({}, mock_model(Role))
    @role.stub!(:quoted_id).and_return(true)
    @role.stub!(:[]=).and_return(true)
    @role.stub!(:save).and_return(true)
  end
end

On Dec 17, 2007 5:10 AM, Andy G. [email protected] wrote:

I am also considering Dan’s mantra

Dan’s mantra of “mock roles, not objects” comes from
http://www.jmock.org/oopsla2004.pdf, a paper of the same name.

My read on this differs from Dan’s a bit. I’ll follow up on that
later, but you might want to give it a read and form your own opinion
before I poison you with mine :)

On Dec 17, 2007 11:02 AM, Scott T. [email protected]
wrote:

death by a thousand cuts.

Certainly. A lesson in premature optimization. Although I did notice that my test suite took about half the time with an in-memory sqlite3 database, so I would find it hard to believe that most of the time is spent in object creation - but... off to do some benchmarking.
True. I also did some custom fixture optimizations. For some reason,
instantiating a Fixture object instance is very slow. I’ve rigged it
so there is only one instance of a Fixture object for each table for
the entire process.
Of course this would break fixture scenarios.

I've had around 20-30% increases using in-memory sqlite, about 1 year ago. I haven't tried it since.

On Dec 17, 2007, at 3:25 AM, Brian T. wrote:

I’ve tried doing this a couple of times and did not save much time at
all. Of course, this was done in view examples on a project that uses
Markaby (which is slow).

Whatever you do, I recommend taking performance metrics of your suite
as you try to diagnose the slowness. The results will probably be
surprising.

Certainly. A lesson in premature optimization. Although I did notice that my test suite took about half the time with an in-memory sqlite3 database, so I would find it hard to believe that most of the time is spent in object creation - but... off to do some benchmarking.

Scott

On Dec 17, 2007 3:02 PM, Brian T. [email protected] wrote:

instantiating a Fixture object instance is very slow. I’ve rigged it
so there is only one instance of a Fixture object for each table for
the entire process.
Of course this would break fixture scenarios.

Did you do that in rspec? Or in your own project?

On Dec 17, 2007 1:08 PM, David C. [email protected] wrote:

True. I also did some custom fixture optimizations. For some reason,
instantiating a Fixture object instance is very slow. I’ve rigged it
so there is only one instance of a Fixture object for each table for
the entire process.
Of course this would break fixture scenarios.

Did you do that in rspec? Or in your own project?
My own project.
I overrode Test::Unit::TestCase @@already_loaded_fixtures with a shim.

True. I also did some custom fixture optimizations. For some reason,
instantiating a Fixture object instance is very slow. I’ve rigged it
so there is only one instance of a Fixture object for each table for
the entire process.
Of course this would break fixture scenarios.

I've had around 20-30% increases using in-memory sqlite, about 1 year ago. I haven't tried it since.

Interesting. I’m not using Fixtures, so I guess this isn’t an option
for me. (I need to figure out a way to speed up FixtureReplacement).

What was so slow in the fixture instantiation?

Scott

On Dec 17, 2007 1:29 PM, Scott T. [email protected]
wrote:

Interesting. I’m not using Fixtures, so I guess this isn’t an option
for me. (I need to figure out a way to speed up FixtureReplacement).

What was so slow in the fixture instantiation?
I didn't isolate what about fixture instantiation was slow. It reads the YAML files and converts the hashes into objects. All I know is that when I did the optimization, I got around a 30% performance increase when loading all fixtures in all examples.